Richard,
I don't think Shane and Marcus's overview of definitions-of-intelligence
is poor quality.
I think it is just doing something different than what you think it should be
doing.
The overview is exactly that: A review of what researchers have said about
the definition of intelligence.
Also, this would involve creating a close-knit community through
conferences, journals, common terminologies/ontologies, email lists,
articles, books, fellowships, collaborations, correspondence, research
institutes, doctoral programs, and other such devices. (Popularization is
not on the
Your job is to be diplomatic. Mine is to call a spade a spade. ;-)
Richard Loosemore
I would rephrase it like this: Your job is to make me look diplomatic ;-p
This list is sponsored by AGIRI: http://www.agiri.org/email
On definitions of intelligence, the canonical reference is
http://www.vetta.org/shane/intelligence.html
which lists 71 definitions. Apologies if someone already pointed out
Shane's page in this thread, I didn't read every message carefully.
An AGI definition of intelligence surely has, by
I'll be a lot more interested when people start creating NLP systems
that are syntactically and semantically processing statements about
words, sentences and other linguistic structures and adding syntactic
and semantic rules based on those sentences.
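A toy sketch of what such a metalinguistic system might do, assuming a trivially simple pattern-based parser (everything here is invented for illustration; real syntactic/semantic processing would be far richer):

```python
# Toy sketch (not a real NLP system): a lexicon that grows by processing
# statements *about* words, i.e. metalinguistic input.
import re

lexicon = {"cat": "noun", "run": "verb"}

def process(sentence):
    """If the sentence asserts a part of speech for a quoted word,
    add that fact to the lexicon as a new lexical rule."""
    m = re.match(r"The word '(\w+)' is a (\w+)\.", sentence)
    if m:
        word, pos = m.groups()
        lexicon[word] = pos  # the system updates its own rules
        return f"learned: {word} -> {pos}"
    return "no metalinguistic content recognized"

print(process("The word 'glark' is a verb."))  # learned: glark -> verb
print(lexicon["glark"])                        # verb
```

The point is only that the input sentence is *about* a linguistic object, and the system's rule set changes as a result.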
Depending on exactly what you mean by
On Jan 10, 2008 10:26 AM, William Pearson [EMAIL PROTECTED] wrote:
On 10/01/2008, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I'll be a lot more interested when people start creating NLP systems
that are syntactically and semantically processing statements *about*
words, sentences
Hi,
Yes, the Texai implementation of Incremental Fluid Construction Grammar
follows the phrase structure approach in which leaf lexical constituents are
grouped into a structure (i.e. construction) hierarchy. Yet, because it is
incremental and thus cognitively plausible, it should scale to
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
- Original Message
From: Benjamin Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, January 10, 2008 11:06:45 AM
Subject: Re: [agi] Incremental Fluid Construction
On Jan 10, 2008 10:03 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
All this discussion of building a grammar seems to ignore the obvious fact
that in humans, language learning is a continuous process that does not
require any explicit encoding of rules. I think either your model should
learn
And how would a young child or foreigner interpret "on the Washington Monument" or "shit list"? Both are physical objects and a book *could* be resting on them.
Sorry, my shit list is purely mental in nature ;-) ... at the moment, I maintain
a task list but not a shit list... maybe I need to
Note that the RelEx output is already abstracted
and semantified compared to what comes out of
a grammar parser.
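For illustration only (these structures are hand-written, not actual RelEx or parser output), the difference might look like this:

```python
# Illustrative only: hand-written structures, not real parser or RelEx output.
# The point: a grammar parser yields many surface-syntactic links, while an
# abstraction layer like RelEx collapses them into fewer, more semantic
# relations, e.g. normalizing passive voice into agent/patient roles.

# Syntactic dependency parse of "The cat was chased by the dog."
syntactic = [
    ("det", "cat", "The"), ("nsubjpass", "chased", "cat"),
    ("auxpass", "chased", "was"), ("prep_by", "chased", "dog"),
    ("det", "dog", "the"),
]

# Abstracted, semantified form: passive construction normalized away,
# determiners dropped, verb lemmatized.
semantic = [("_subj", "chase", "dog"), ("_obj", "chase", "cat")]

assert len(semantic) < len(syntactic)  # fewer, more abstract relations
print(semantic)
```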
-- Ben
On Jan 9, 2008 5:59 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Can you give about ten examples of rules? (That would answer a lot of my
questions above)
That would just lead to a really long list of questions that I don't have time to answer right now
In a month or two, we'll write a paper on the rule-encoding approach we're
using, and I'll post it to
Processing a dictionary in a useful way
requires quite sophisticated language understanding ability, though.
Once you can do that, the hard part of the problem is already
solved ;-)
Ben
On Jan 9, 2008 7:22 PM, William Pearson [EMAIL PROTECTED] wrote:
On 09/01/2008, Benjamin Goertzel [EMAIL
On Jan 7, 2008 9:12 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Robert,
Look, the basic reality is that computers have NOT yet been creative in any significant way, and have NOT yet achieved AGI - general intelligence - or indeed any significant rule-breaking adaptivity. (If you disagree,
On Jan 7, 2008 12:08 PM, David Butler [EMAIL PROTECTED] wrote:
Would two AGIs with the same initial learning program, the same hardware, in a controlled environment (same access to a specific learning base, something like an encyclopedia), learn at different rates and excel at different tasks?
Yes
Nothing of that nature is planned at present ... as we the conference organizers
are rather busy with other stuff, we've been pretty much fully whelmed with the
organization of the First Life conference...
It might be fun to do an in-world AGI meet-up a couple weeks after AGI-08, with
an aim of
I'll forward this request to those who will be handling such things...
thx
ben
On Jan 7, 2008 3:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Ben,
I'm certainly not in position to ask for it, but if it's possible, can
some kind of microphones be used during presentations on agi-08 (if
On Jan 5, 2008 10:52 PM, Mike Tintner [EMAIL PROTECTED] wrote:
I think I've found a simple test of cog. sci.
I take the basic premise of cog. sci. to be that the human mind - and
therefore its every activity, or sequence of action - is programmed.
No. This is one perspective taken by some
I don't really understand what you mean by "programmed" ... nor by "creative".
You say that, according to your definitions, a GA is "programmed" and ergo cannot be creative...
How about, for instance, a computer simulation of a human brain? That
would be operated via program code, hence it would be
On Jan 6, 2008 4:00 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
Sounds like you may have missed the whole point of the test - though I mean
no negative comment by that - it's all a question of communication.
A *program* is a prior series or set of instructions that shapes and
determines
Mike,
The short answer is that I don't believe that computer *programs* can be
creative in the hard sense, because they presuppose a line of enquiry, a
predetermined approach to a problem -
...
But I see no reason why computers couldn't be briefed rather than
programmed, and freely associate
If you believe in principle that no digital computer program can ever
be creative, then there's no point in me or anyone else rambling on at
length about their own particular approach to digital-computer-program
creativity...
One question I have is whether you would be convinced that digital
Matt,
I agree w/ your question...
I actually think KBs can be useful in principle, but I think they
need to be developed
in a pragmatic way, i.e. where each item of knowledge added can be validated via
how useful it is for helping a functional intelligent agent to achieve
some interesting
On Dec 28, 2007 5:59 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
OpenCog is definitely a positive thing to happen in the AGI scene. It's
been all vaporware so far.
Yes, it's all vaporware so far ;-)
On the other hand, the code we hope to release as part of OpenCog actually
exists, but
On Dec 28, 2007 8:28 AM, Richard Loosemore [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
I wish you much luck with your own approach. And I would imagine that if you create a software framework supporting your own approach in a convenient way, my own currently favored AI
Loosemore wrote:
I am sorry, but I have reservations about the OpenCog project.
The problem of building an open-source AI needs a framework-level tool
that is specifically designed to allow a wide variety of architectures
to be described and expressed.
OpenCog, as far as I can see, does not
Re the recent discussion of OpenCog -- this recent post I made
to the OpenCog mailing list may perhaps help clarify the
intentions underlying the project further.
-- Ben
-- Forwarded message --
From: Benjamin Goertzel [EMAIL PROTECTED]
Date: Dec 27, 2007 11:07 AM
Subject: Re
I think that at first sight this goes to support my position in the original
argument with Ben- namely that there are all kinds of ways to get at or read
minds, and there is now an increasing momentum to do that.
Being able to read the stream of subvocalizations coming out from a person's
Hi all,
Here you'll find a paper
http://goertzel.org/new_research/WCCI_AGI.pdf
that I've submitted to the WCCI 2008 Special Session
on Human-level AI.
It tries to summarize the big picture about how advanced
AI can be achieved via synthesizing NLP and virtual embodiment...
The paper refers to
For those interested in automated theorem-proving,
I'm pleased to announce a major advance in tools
has occurred...
The Mizar library of formalized math has finally
been translated into a sensible format, usable
for training automated theorem-proving systems ;-)
Josef Urban informed me that
I would add that the Chinese universities are extremely eager to
recruit Western professors to lead research labs in AI and other
areas.
Hugo de Garis relocated there a year or so ago, and is quite relieved
to be supplied with a bunch of excellent research assistants and loads
of computational
Mike,
My comment is that this is GREAT research and development, but, for
the near and probably medium future is very likely to be about perception
and action rather than cognition.
I.e., we are sort of on the verge of understanding how to hook up new
sensors to the brain, and hook the brain up
Mike wrote:
Personally, my guess is that serious mindreading machines will be a reality
in the not too distant future - before AGI and seriously autonomous mobile
robots.
No way.
Tell that to the neuroscientists in your local university neuro lab, and they'll
get a good laugh ;-)
The
Hi,
From Bob Mottram on the AGI list:
However, I'm not expecting to see the widespread cyborgisation of
human society any time soon. As the article suggests the first
generation implants are all devices to fulfill some well defined
medical need, and will have to go through all the usual
Bear in mind that science has used very little imagination here to date.
Science only started studying consciousness ten years ago. It still hasn't
started studying Thought - the actual contents of consciousness: the
streams of thought inside people's heads. In both cases, the reason has been
Mike:
Making the general public smarter is not in the best interest of government, which wants to keep us fat, dumb, and (relatively) happy (read: distracted).
If we're not making people smarter with currently available resources,
why would we invest in research to discover expensive new
Is China pushing its people into being smarter? Are they giving
incentives beyond the US-style capitalist reasons for being smart?
The incentive is that if you get smart enough, you may figure out a way
to get out of China ;-)
Thus, they let the top .01% out, so as to keep the rest of the
, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Is China pushing its people into being smarter? Are they giving
incentives beyond the US-style capitalist reasons for being smart?
The incentive is that if you get smart enough, you may figure out a way
to get out of China ;-)
Thus
Mike
In case you're curious I wrote down my theory of
emotions here
http://www.goertzel.org/dynapsyc/2004/Emotions.htm
(an early version of text that later became a chapter in The
Hidden Pattern)
Among the conclusions my theory of emotions leads to are, as stated there:
*
* AI systems
Self-organizing complexity and computational complexity are quite separate technical uses of the word "complexity", though I do think there are subtle relationships.
As an example of a relationship btw the two kinds of complexity, look
at Crutchfield's
work on using formal languages to model the
Thanks Bob. But I meant, it looks more likely that robots will achieve - and
have already taken the first concrete steps to achieve - the goals of AGI -
the capacity to learn a range of abilities and activities.
Can you point to any single robot that has demonstrated the capability to
learn a
Yes I expect to see more narrow AI robotics in future, but as time
goes on there will be pressures to consolidate multiple abilities into
a single machine. Ergonomics dictates that people will only accept a
limited number of mobile robots in their homes or work spaces.
Physical space is at a
So I reckon roboticists ARE actually focussed on an AGI challenge - whereas,
as I've pointed out before, there is nothing comparable in pure AGI.
To my knowledge, none of the work on the ICRA Robotic Challenge is at
this point taking a strong AGI approach
And
with all those millions of
On Dec 7, 2007 7:09 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Matt: "AGI research needs special hardware with massive computational capabilities."
Could you give an example or two of the kind of problems that your AGI
system(s) will need such massive capabilities to solve? It's so good -
On Dec 7, 2007 10:21 AM, Bob Mottram [EMAIL PROTECTED] wrote:
If I had 100 of the highest specification PCs on my desktop today (and
it would be a big desk!) linked via a high speed network this wouldn't
help me all that much. Provided that I had the right knowledge I
think I could produce a
On Dec 6, 2007 8:06 PM, Ed Porter [EMAIL PROTECTED] wrote:
Ben,
To the extent it is not proprietary, could you please list some of the types
of parameters that have to be tuned, and the types, if any, of
Loosemore-type complexity problems you envision in Novamente or have
experienced with
Clearly the brain works VASTLY differently and more efficiently than current
computers - are you seriously disputing that?
It is very clear that in many respects the brain is much less efficient than
current digital computers and software.
It is more energy-efficient by and large, as Read
On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben: To publish your ideas
in academic journals, you need to ground them in the existing research
literature,
not in your own personal introspective observations.
Big mistake. Think what would have happened if Freud had
There is no doubt that complexity, in the sense typically used in
dynamical-systems-theory, presents a major issue for AGI systems. Any
AGI system with real potential is bound to have a lot of parameters
with complex interdependencies between them, and tuning these
parameters is going to be a
Conclusion: there is a danger that the complexity that even Ben agrees
must be present in AGI systems will have a significant impact on our
efforts to build them. But the only response to this danger at the
moment is the bare statement made by people like Ben that I do not
think that the
Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the job.
I do not believe
Tintner wrote:
Your paper represents almost a literal application of the idea that
creativity is ingenious/lateral. Hey it's no trick to be just
ingenious/lateral or fantastic.
Ah ... before, creativity was what was lacking. But now you're shifting arguments and it's something else that is
More generally, I don't perceive any readiness to recognize that the brain
has the answers to all the many unsolved problems of AGI -
Obviously the brain contains answers to many of the unsolved problems of
AGI (not all -- e.g. not the problem of how to create a stable goal system
under
On Dec 4, 2007 8:38 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
[snip]
And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI. The closest
thing to such an argument that I've seen
Thus: building a NL parser, no matter how good it is, is of no use
whatsoever unless it can be shown to emerge from (or at least fit with)
a learning mechanism that allows the system itself to generate its own
understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
MECHANISM
Richard,
Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!
I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)
The argument I presented was not a conjectural assertion, it made the
following coherent case:
What makes anyone think OpenCog will be different? Is it more
understandable? Will there be long-term aficionados who write
books on how to build systems in OpenCog? Will the developers
have experience, or just adolescent enthusiasm? I'm watching
the experiment to find out.
Well, OpenCog
OK, understood...
On Dec 4, 2007 9:32 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
Thus: building a NL parser, no matter how good it is, is of no use
whatsoever unless it can be shown to emerge from (or at least fit with)
a learning mechanism that allows
On Nov 30, 2007 7:57 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben: It seems to take tots a damn lot of trials to learn basic skills
Sure. My point is partly that human learning must be pretty quantifiable in
terms of number of times a given action is practised,
Definitely NOT ... it's very
Yeah, I've been following that for a while. There are some very smart
people involved, and it's quite possible they'll make a useful
software tool, but I don't feel they have a really viable unified
cognitive architecture. It's the sort of architecture where different
components are written in
[What related principles govern the Novamente figure's trial-and-error learning of how to pick up a ball?]
Pure trial and error learning is really slow though... we are now
relying on a combination of
-- reinforcement from a teacher
-- imitation of others' behavior
-- trial and error
--
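A minimal sketch of how those signals might be blended in action selection (all names, weights, and numbers are invented for illustration; this is not the actual Novamente mechanism):

```python
import random
random.seed(0)

# Toy sketch of blending three learning signals for one action choice.
actions = ["reach", "grasp", "release"]

def teacher_reward(a):   # reinforcement signal from a teacher
    return {"reach": 0.2, "grasp": 1.0, "release": 0.0}[a]

def imitation_prior(a):  # how often others were observed performing a
    return {"reach": 0.5, "grasp": 0.4, "release": 0.1}[a]

def pick_action(epsilon=0.1):
    """Mostly exploit the blended score; occasionally act at random
    (the trial-and-error component)."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: 0.7 * teacher_reward(a)
                                      + 0.3 * imitation_prior(a))

print(pick_action())  # → grasp
```

Pure trial and error corresponds to epsilon = 1.0; shrinking epsilon as the teacher and imitation signals accumulate is what speeds learning up.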
On Nov 29, 2007 11:35 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Presumably, human learning isn't that slow though - if you simply count the number of attempts made before any given movement is mastered at a basic level (e.g. crawling, walking, grasping, tennis forehand, etc.)? My guess would be
On Nov 30, 2007 12:03 AM, Dennis Gorelik [EMAIL PROTECTED] wrote:
Benjamin,
That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?
Yes, but it's a largely irrelevant point. Because building a narrow-AI
system in an AGI-compatible
So far only researchers/developers who picked narrow-AI approach
accomplished something useful for AGI.
E.g.: Google, computer languages, network protocols, databases.
These are tools that are useful for AGI RD but so are computer
monitors, silicon chips, and desk chairs. Being a useful tool
Ed, I must admit I have never heard a cortical column described as containing 10^5 neurons. The figure I have commonly seen is 10^2 neurons for a cortical column, although my understanding is that the actual number could be either less or more. I guess the 10^5 figure would relate to so-called
Still, this is the most
resource-intensive part of
the Novamente system (the part that's most likely to require
supercomputers to
achieve human-level AI).
Why is it the most resource-intensive? Is it the evolutionary computational cost? Is this where MOSES is used?
Correct, this is
Nearly any AGI component can be used within a narrow AI,
That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?
Yes, but it's a largely irrelevant point. Because building a narrow-AI
system in an AGI-compatible way is HARDER than
My claim is that it's possible [and necessary] to split the massive amount of work that has to be done for AGI into smaller narrow-AI chunks in such a way that every narrow-AI chunk has its own business meaning and can pay for itself.
You have not addressed my claim, which has massive evidence
Well, there is a discipline of computer science devoted to automatic
programming, i.e. synthesizing software based on specifications of
desired functionality.
State of the art is:
-- Just barely, researchers have recently gotten automated program learning to synthesize an O(n log n) sorting algorithm
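A toy illustration of the enumerative flavor of such program synthesis (the DSL, primitives, and examples here are all invented; real systems search vastly larger program spaces):

```python
from itertools import product

# Primitive operations the synthesizer may compose (toy DSL).
PRIMS = {
    "reverse": lambda xs: list(reversed(xs)),
    "drop_first": lambda xs: xs[1:],
    "double": lambda xs: [2 * x for x in xs],
}

def synthesize(examples, max_len=3):
    """Enumerate compositions of primitives until one satisfies every
    input/output example; return the program as a list of op names."""
    for length in range(1, max_len + 1):
        for prog in product(PRIMS, repeat=length):
            def run(xs, prog=prog):
                for op in prog:
                    xs = PRIMS[op](xs)
                return xs
            if all(run(i) == o for i, o in examples):
                return list(prog)
    return None

# Specification by examples: the target behavior is "reverse, then double".
spec = [([1, 2, 3], [6, 4, 2]), ([5], [10])]
print(synthesize(spec))  # → ['reverse', 'double']
```

The brute-force search space grows exponentially with program length, which is why synthesizing even an O(n log n) sort from a specification is a notable milestone.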
Linas:
I find it telling that no one is saying "I've got the code, I just need to scale it up 1000-fold to make it impressive" ...
Yes, that's an accurate comment. Novamente will hopefully reach that
point in a few years.
For now, we will need (and use) a lotta machines for commercial product
Cassimatis's system is an interesting research system ... it doesn't yet have
lotsa demonstrated practical functionality, if that's what you mean by
work...
He wants to take a bunch of disparately-functioning agents, and hook
them together
into a common framework using a common logical
Are you asking for success stories regarding research funding in any
domain,
or regarding research funding in AGI?
Any domain, please.
OK, so your suggestion is that research funding, in itself, is worthless in
any domain?
I don't really have time to pursue this kind of silly
No.
My point is that massive funding without having a prototype prior to the funding is worthless most of the time.
If a prototype cannot be created at reasonably low cost, then a fully working product most likely cannot be created even with massive funding.
Well, this seems to dissolve into a
On Nov 20, 2007 11:22 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:
Jiri,
AGI is IMO possible now, but requires a very different approach than narrow AI.
AGI requires properly tuning some existing narrow-AI technologies, combining them together, and maybe adding a couple more.
That's massive
Could you describe a piece of technology that simultaneously:
- Is required for AGI.
- Cannot be a required part of any useful narrow AI.
The key to your statement is the word "required"
Nearly any AGI component can be used within a narrow AI, but, the problem
is,
it's usually a bunch easier
On Nov 18, 2007 12:50 AM, Dennis Gorelik [EMAIL PROTECTED] wrote:
Benjamin,
Do you have any success stories of such research funding in the last
20 years?
Something that resulted in useful accomplishments.
Are you asking for success stories regarding research funding in any domain,
or
Novamente as a whole is definitely a research project, albeit one with
a very well fleshed out research plan. I have a strong hypothesis about
how the project will come out, and arguments in favor of this hypothesis;
but I don't have the level of confidence I'd have in, say, the stability
of a
Proactively minimizing risk in as many areas as possible makes a venture much more salable, but most AI ventures tend to be very apparently risky at many levels that have no relation to the AI research per se, and the inability of these ventures to minimize all that unnecessary risk is a giant
I have not heard a *creative* new idea
here that directly addresses and shows the power to solve even in part the
problem of creating general intelligence.
To be quite frank, the most creative and original ideas inside the Novamente
design are quite technical. I suspect you don't have the
On Nov 18, 2007 6:45 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Ben,
Have you already considered what form of multi-agent epistemic logic
(or whatever extension to PLN) Novamente will use to merge knowledge
from different avatars?
Well, standard PLN handles this in principle via
Hi,
The majority of VCs do, as you say, want a technology that is sewn up,
from the point of view of technical feasibility. But this is not always
true. There is always a gray area at the fringe of feasibility where
the last set of questions has not been *fully* answered before money is
On Nov 18, 2007, at 7:06 PM, Benjamin Goertzel wrote:
Navigating complex social and business situations requires a quite
different set of capabilities than creating AGI. Potentially they
could
be combined in the same person, but one certainly can't assume that
would be the case.
I
On Nov 18, 2007 11:24 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
There are a lot of worthwhile points in your post, and a number of things
I don't fully agree with, but I don't have time to argue them all right
now...
Instead I'll just pick two points:
er, looks like that was three
On Nov 17, 2007 1:08 PM, Dennis Gorelik [EMAIL PROTECTED] wrote:
Jiri,
Give $1 for the research to who?
A research team can easily eat millions of $$$ without producing any useful results.
If you just randomly pick researchers for investment, your chances of getting any useful outcome from the
Richard,
Though we have theoretical disagreements, I largely agree with your
analysis of the value of prototypes for AGI.
Experience has shown repeatedly that prototypes displaying apparently
intelligent behavior in various domains are very frequently dead-ends,
because they embody various sorts
About PolyWorld and Alife in general...
I remember playing with PolyWorld 10 years ago or so. And I had a grad student at the Uni. of Western Australia build a similar system, back in my Perth days... (it was called SEE, for Simple Evolving Ecology. We never published anything on it, as I left
I think that linguistic interaction with human beings is going to be what lifts Second Life proto-AGIs beyond the glass ceiling...
Our first SL agents won't have language generation or language learning
capability, but I think that introducing it is really essential, esp. given
the limitations
On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
On Thursday 15 November 2007 08:16, Benjamin Goertzel wrote:
non-brain-based AGI. After all it's not like we know how real
chemistry gives rise to real biology yet --- the dynamics underlying
protein-folding remain ill
in ways no one understands
really.. etc. etc. etc. ;-)
ben
On Nov 15, 2007 10:07 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
On Thursday 15 November 2007 20:02, Benjamin Goertzel wrote:
On Nov 15, 2007 8:57 PM, Bryan Bishop [EMAIL PROTECTED] wrote:
Can anybody elaborate on the actual problems
Hi,
No: the real concept of lack of grounding is nothing so simple as the way you are using the word "grounding".
Lack of grounding makes an AGI fall flat on its face and not work.
I can't summarize the grounding literature in one post. (Though, heck,
I have actually tried to do that in
Richard,
So here I am, looking at this situation, and I see:
AGI system interpretation (implicit in the system's use of it)
Human programmer interpretation
and I ask myself which one of these is the *real* interpretation?
It matters, because they do not necessarily match up.
That
On Nov 14, 2007 1:36 PM, Mike Tintner [EMAIL PROTECTED] wrote:
RL: In order to completely ground the system, you need to let the system build its own symbols
Correct. Novamente is designed to be able to build its own symbols.
What is built in are mechanisms for building symbols, and for
Richard,
I recently saw a talk by Todd Huffman at the Foresight Unconference on the
topic of
mind uploading technology, and he was specifically showing off techniques
for imaging slices of brain, that *do* give the level of biological detail
you're thinking of. Topics of discussions were, for
or b) too fragile
to succeed -- particularly since I'm pretty sure that you couldn't
convince me without making some serious additions to Novamente
- Original Message -
*From:* Benjamin Goertzel mailto:[EMAIL PROTECTED]
*To:* agi@v2.listbox.com mailto:agi@v2
For example, what is the equivalent of the activation control (or search) algorithm in Google Sets? They operate over huge data. I bet the algorithm for calculating their search or activation is relatively simple (much, much, much less than a PhD thesis), and look what they can do. So I
Yes, I thought I had heard of people trying more ambitious techniques,
but in the cases I heard of (can't remember where now) the tradeoffs
always left the approach hanging on one of the issues: for example, was
he talking about scanning mitochondrial activity in vivo, in real time,
On Nov 13, 2007 2:37 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ben,
Unfortunately what you say below is tangential to my point, which is
what happens when you reach the stage where you cannot allow any more
vagueness or subjective interpretation of the qualifiers, because you
have to
This is the thing that I think is relevant to Robin Hanson's original question. I think we can build 1+2 in short order, and maybe 3 in a while longer. But the result of 1+2+3 will almost surely be an idiot savant: knows everything about horses, and can talk about them at length, but, like
But as a human, asking Wen out on a date, I don't really know what "Wen likes cats" ever really meant. It neither prevents me from talking to Wen, nor from telling my best buddy that "...well, I know, for instance, that she likes cats..."
yes, exactly...
The NLP statement "Wen likes cats" is
So, vagueness can not only be imported into an AI system from natural language, but also propagated around the AI system via inference.
This is NOT one of the trickier things about building probabilistic AGI,
it's really
kind of elementary...
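A sketch of that elementary propagation, using an independence-based deduction rule in the spirit of (but far simpler than) PLN; all the numbers below are invented for illustration:

```python
# How a vague truth value propagates through inference. "Wen likes cats"
# enters with strength 0.8 (how strongly true), and deduction combines it
# with other uncertain links under a simple independence assumption.

def deduce(s_ab, s_bc, s_b, s_c):
    """Strength of A->C from strengths of A->B and B->C plus the base
    rates of B and C, assuming independence outside the B channel."""
    return s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)

# A = Wen, B = likes-cats, C = likes-small-furry-animals
s_ab = 0.8          # Wen likes cats (vague: imported from natural language)
s_bc = 0.9          # cat-likers tend to like small furry animals
s_b, s_c = 0.3, 0.4 # base rates of B and C in the population

print(round(deduce(s_ab, s_bc, s_b, s_c), 3))  # → 0.757
```

The output is itself a graded strength, so the vagueness of the original natural-language statement flows through to every conclusion drawn from it.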
-- Ben G