Bob,
The two biologists I know who are deep into mind uploading
(Randal Koene and Todd Huffman) both agree with your basic assessment,
I believe...
ben g
On Nov 13, 2007 4:37 PM, Bob Mottram [EMAIL PROTECTED] wrote:
It seems quite possible that what we need is a detailed map of every
Hi,
Research project 1. How do you find analogies between neural networks,
enzyme kinetics and the formation of galaxies (hint: think Boltzmann)?
That is a question most humans couldn't answer, and is only suitable for
testing an AGI that is already very advanced.
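The "think Boltzmann" hint presumably points at the Boltzmann factor, which appears in all three domains (a reading I'm inferring, not one stated in the thread):

```latex
P(s) \propto e^{-E(s)/kT}   % Boltzmann distribution: probability of state s with energy E(s)
```

The same exponential-in-energy form drives Boltzmann machines in neural networks, the Arrhenius rate law k = A e^{-Ea/RT} in enzyme kinetics, and the statistical mechanics of self-gravitating systems in galaxy formation.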
In your opinion. I
That's a good, simple starting case. But how do you decide how much
knowledge to disburse? How do you know what is irrelevant? How much do
your answers differ between a small farmer in New Zealand, a rodeo rider in
the West, a veterinarian in Pennsylvania, a child in Washington, a
On Nov 12, 2007 1:49 PM, Mark Waser [EMAIL PROTECTED] wrote:
I'm more interested at this stage in analogies like
-- between seeking food and seeking understanding
-- between getting an object out of a hole and getting an object out of
a pocket, or a guarded room
Why would one need to
I actually know that author pretty well (Kent Palmer); we met F2F and I
learned
a lot of Eastern and Middle-Eastern philosophy from the guy.
He is a good software designer, in his day job. In his philosophical
writings, however, he is not a scientist.
His writing can be interesting if you're
On Nov 12, 2007 2:51 PM, Mark Waser [EMAIL PROTECTED] wrote:
I don't know at what point you'll be blocked from answering by
confidentiality concerns
I can't say much more than I will do in this email, due to customer
confidentiality concerns
but I'll ask a few questions you hopefully can
On Nov 12, 2007 2:41 PM, Mark Waser [EMAIL PROTECTED] wrote:
It is NOT clear that Novamente documentation is NOT enabling, or could
not be made enabling with, say, one man-year of work. Strong arguments
could be made both ways.
I believe that Ben would argue that Novamente
I am heavily focussed on my own design at the moment, but when you talk
about the need for 100+ hours of studying detailed NM materials, are you
talking about publicly available documents, or proprietary information?
Proprietary info, much of which may be made public next year, though...
On Nov 12, 2007 8:44 PM, Mark Waser [EMAIL PROTECTED] wrote:
I don't think BenG claimed to be able to build an AGI in 6 months,
but rather something that can fake it for a brief period of time.
I was rising to the defense of that.
No. Ben is honest in his claims and he said that this was
But we do not yet have a complete, verifiable theory, let alone a
practical design.
- Jef
To be more accurate, we don't have a practical design that is commonly
accepted in the AGI research community.
I believe that I *do* have a practical design for AGI and I am working hard
toward
Richard,
Even Ben Goertzel, in a recent comment, said something to the effect
that the only good reason to believe that his model is going to function
as advertised is that *when* it is working we will be able to see that
it really does work:
The above paragraph is a distortion of what I
Richard,
Thus: if someone wanted volunteers to fly in their brand-new aircraft
design, but all they could do to reassure people that it was going to
work were the intuitions of suitably trained individuals, then most
rational people would refuse to fly - they would want more than
Would it be possible to set up your program so that EVERY concept is open
to doubt - open to being changed radically in meaning as new examples come
along?
That, it would seem, is how the human brain operates.
Both Novamente and NARS work that way. Cyc, for example, does not.
-- Ben G
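The "every concept open to doubt" idea above can be made concrete. A minimal sketch (my own illustration, not Novamente or NARS code): store each concept as a prototype that shifts with every new example, rather than as a fixed definition.

```python
class Concept:
    """A concept whose meaning drifts as new examples arrive."""
    def __init__(self, name):
        self.name = name
        self.prototype = None   # running mean of observed feature vectors
        self.count = 0

    def observe(self, features):
        """Fold a new example into the concept."""
        if self.prototype is None:
            self.prototype = list(features)
        else:
            for i, x in enumerate(features):
                # incremental mean: old + (new - old) / n
                self.prototype[i] += (x - self.prototype[i]) / (self.count + 1)
        self.count += 1

def closest(concepts, features):
    """Classify by nearest prototype (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c.prototype, features))
    return min((c for c in concepts if c.prototype is not None), key=dist)

bird = Concept("bird")
bird.observe([1.0, 1.0])   # flies, sings
bird.observe([0.0, 1.0])   # a penguin shifts the prototype to [0.5, 1.0]
```

No concept here is ever final; each observation radically or subtly revises what the symbol means, which is the property the question asks about.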
Robin,
To add onto Edward Porter's excellent summary, I would note the
considerable power that virtual worlds technology has to accelerate
advancement towards AGI, as I argued in a recent article on
KurzweilAI.net (and another recent article to appear in AI Journal
shortly)
AI Meets the
Maybe it would be easy
to rip out Cyc's upper ontology, and replace it by SUMO's,
or v.v. I don't know ... I suspect it's not, and that bothers
me; that is a bit of an important problem.
It would *not* be easy to do so, and this is a significant problem...
IMO, the whole approach of building
Is there any research that can tell us what kind of structures are better
for machine learning? Or perhaps w.r.t a certain type of data? Are there
learning structures that will somehow learn things faster?
There is plenty of knowledge about which learning algorithms are better for
which
Hi all,
Just a reminder that we are soliciting papers on
Sociocultural, Ethical and Futurological Implications of Artificial General
Intelligence
to be presented at a workshop following the AGI-08 conference in Memphis
(US) in March.
http://www.agi-08.org/workshop/
The submission deadline is
I think that if it were dumb enough that it could be treated as a tool,
then it would have to be unable to understand that it was being used as
a tool.
And if it could not understand that, it would just not have any hope of
being generally intelligent.
You seem to be assuming this
Jiri,
IMO, proceeding with AGI development using formal-language input
rather than NL input is **not** necessarily a bad approach.
However, one downside is that your incremental steps toward AGI, in
this approach, will not be very convincing to skeptics.
Another downside is that in this
On Oct 30, 2007 4:59 AM, Joshua Fox [EMAIL PROTECTED] wrote:
Surely Marvin Minsky -- a top MIT professor, with a world-beating
reputation in multiple fields -- can snap his fingers and get all the
required funding, whether commercial or non-profit, for AGI projects which
he initiates or
I love Deb Roy and think his work is wonderful, but one thing he does NOT
have is a coherent design for an AGI ...
As I understand it, what he's doing now is aimed at gathering loads of
speech data, for later analysis...
His prior work on robotics and symbol grounding was also really cool, but
So it validates that a children's Turing Test starting at age 5 is not a
stupid or unworkable idea.
Of course, the extent to which it is a *valuable* idea is another story ...
;-)
-- Ben G
On Oct 29, 2007 11:06 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Can anyone find that paper online
However, the Principle of Computational Equivalence is really kind of
useless, because it doesn't take into account computational complexity...
Yes, you can probably simulate any complex systems phenomenon using
a CA. But, with what pragmatic penalty in terms of computational cost?
Some may be
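For readers who want a concrete reference point for the CA discussion above, here is a minimal elementary cellular automaton stepper (my own sketch, using the standard Wolfram rule numbering). The CA itself is trivially cheap to run; the pragmatic-penalty point is that *encoding some other system* into such a substrate typically costs a large constant or polynomial slowdown.

```python
def step(cells, rule=110):
    """One synchronous update of a 1-D binary CA with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (mid << 1) | right      # neighborhood as a number 0..7
        out.append((rule >> idx) & 1)               # look up that bit of the rule number
    return out

row = [0, 0, 0, 1, 0, 0, 0]
row = step(row)   # Rule 110: a lone 1 grows leftward
```

Rule 110 is Turing-universal, which is exactly why "you can simulate it on a CA" says little by itself about cost.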
Check out the Forum site at AGIRI.org
It exists, it works, it's used occasionally ... but for whatever reason,
this email list gets a lot more traffic...
On the other hand, the ImmInst.org fora are quite intensely used ... I'm not
sure why the situation is so different in that context...
-- Ben
Interesting...
Indeed, this is something we wanted to do in AGISim, but didn't get to
yet...
Furthermore, this is the sort of thing that is irritatingly difficult to do
in Second Life, because of the way avatar movements work in SL... (thru the
external API you don't get information on bone
And I really am not seeing any difference between what I understand as
your opinion and what I understand as his.
Sorry if I seemed to be hammering on anyone, it wasn't my intention.
(Yesterday was a sort of bad day for me for non-science-related reasons, so
my tone of e-voice was likely
On 10/22/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Monday 22 October 2007 08:05:26 am, Benjamin Goertzel wrote:
... but dynamic long-term memory, in my view, is a wildly
self-organizing mess, and would best be modeled algebraically as a
quadratic
iteration over a high-dimensional
About the Granger paper, I thought last night of a concise summary of
how bad it really is. Imagine that we had not invented computers, but
we were suddenly given a batch of computers by some aliens, and we tried
to put together a science to understand how these machines worked.
Suppose,
On 10/22/07, Mark Waser [EMAIL PROTECTED] wrote:
-- I think Granger's cog-sci speculations, while oversimplified and
surely wrong in parts, contain important hints at the truth (and in my prior
email I tried to indicate how)
-- Richard OTOH, seems to consider Granger's cog-sci speculations
Granger has nothing new in cog sci except some of the particular details
in b) -- which you find uncompelling and oversimplified -- so what is the
cog sci that you find of value?
--
Apparently we are using cog sci in slightly different ways...
I agree that he
But each of these things has a huge raft of assumptions built into it:
-- hierarchical clustering ... OF WHAT KIND OF SYMBOLS?
-- hash coding ... OF WHAT KIND OF SYMBOLS?
-- sequence completion ... OF WHAT KIND OF SYMBOLS?
In each case, Granger's answer is that the symbols are
On 10/22/07, Mark Waser [EMAIL PROTECTED] wrote:
I think we've beaten this horse to death . . . . :-)
However, he has some interesting ideas about the connections between
cognitive primitives and neurological structures/dynamics. Connections of
this nature are IMO cog sci rather than
As I said above, it leaves many things unsaid and unclear. For example,
does it activate all or multiple nodes in a cluster together or not? Does
it always activate the most general cluster covering a given pattern, or
does it use some measure of how well a cluster fits input to select
Some semi-organized responses to points raised in this thread...
1) About spatial maps...
It seems to be the case that the brain uses spatial maps a lot, which
abstract
considerably from the territory they represent
Similarly in Novamente we have a spatial map data structure which has an
On 10/21/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Ben,
Good Post
In my mind, the ability to map each of N things into a model of a space is a
very valuable thing. It lets us represent all of the N^2 spatial
relationships between those N things based on just N mappings. This is
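Porter's point can be sketched directly: store N items as coordinates (N mappings), and any of the N^2 pairwise spatial relations can be derived on demand instead of stored explicitly. A toy illustration (the item names and coordinates are invented):

```python
import math

# N mappings: each thing gets one position in a model of the space.
positions = {
    "house": (0.0, 0.0),
    "tree":  (3.0, 4.0),
    "car":   (6.0, 0.0),
}

def relation(a, b):
    """Derive a spatial relation on demand; none of the N^2 pairs is stored."""
    (ax, ay), (bx, by) = positions[a], positions[b]
    dist = math.hypot(bx - ax, by - ay)
    bearing = "east" if bx > ax else "west" if bx < ax else "level"
    return dist, bearing

d, b = relation("house", "tree")   # d == 5.0, b == "east"
```

Storage stays O(N) while the relational knowledge available for inference is O(N^2), which is the economy the post describes.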
Loosemore wrote:
Edward
If I were you, I would not get too excited about this paper, nor others
of this sort (see, e.g. Granger's other general brain-engineering paper
at http://www.dartmouth.edu/~rhg/pubs/RHGai50.pdf).
This kind of research comes pretty close to something that deserves
About NARS... Nesov/Wang dialogued:
Why do you need so many rules?
I didn't expect so many rules myself at the beginning. I add new rules
only when the existing ones are not enough for a situation. It will be
great if someone can find a simpler design.
I feel that some of the complexity
The questions you ask are not worth asking, because you cannot do
anything with a 'theory' (Granger's) that consists of a bunch of vague
assertions about various outdated, broken cognitive ideas, asserted
without justification.
Richard Loosemore
Richard, you haven't convinced me, but I
On 10/21/07, Pei Wang [EMAIL PROTECTED] wrote:
The difference between NARS and PLN has much more to do with their
different semantics, than with their different logical/algebraic
formalism.
Sure; in both cases, the algebraic structure of the rules and the
truth-value formulas follow from the
It took me at least five years of struggle to get to the point where I
could start to have the confidence to call a spade a spade, and dismiss
stuff that looked like rubbish.
Now, you say we have to forgive academics for doing this? The hell we
do.
If I see garbage being peddled as if
And you are also not above making patronizing remarks in which you
implicitly refer to someone as behaving in a simian -- i.e.
monkey-like manner.
Hey, I'm a monkey too -- and I'm pretty tired of being one. Let's bring on
the
Singularity already!!!
If you read the paper I just wrote,
:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
*Sent:* Saturday, October 20, 2007 4:01 PM
*To:* agi@v2.listbox.com
*Subject:* Re: [agi] An AGI Test/Prize
On 10/20/07, Edward W. Porter [EMAIL PROTECTED] wrote:
John,
So rather than a definition of intelligence you want a recipe for how
?act=ST&f=21&t=23
-- Ben
-Original Message-
*From:* Benjamin Goertzel [mailto:[EMAIL PROTECTED]
*Sent:* Saturday, October 20, 2007 5:24 PM
*To:* agi@v2.listbox.com
*Subject:* Re: [agi] An AGI Test/Prize
Ah, gotcha...
The recent book Advances in Artificial General Intelligence gives
Well, one problem is that the current mathematical definition of general
intelligence
is exactly that -- a definition of totally general intelligence, which is
unachievable
by any finite-resources AGI system...
On the other hand, IQ tests and such measure domain-specific capabilities as
much as
On 10/18/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'd be interested in everyone's take on the following:
1. What is the single biggest technical gap between current AI and AGI? (
e.g.
we need a way to do X or we just need more development of Y or we have the
ideas, just need
I guess, off the top of my head, the conversational equivalent might be a
Story Challenge - asking your AGI to tell some explanatory story about a
problem that had occurred to it recently, (designated by the tester), and
then perhaps asking it to devise a solution. Just my first thought -
1. What is the single biggest technical gap between current AI and AGI?
(e.g.
we need a way to do X or we just need more development of Y or we have
the
ideas, just need hardware, etc)
The biggest gap is the design of a system that can absorb information
generated by other
That's where I think
narrow Assistive Intelligence could add the sender's assumed context
to a neutral exchange format that the receiver's agent could properly
display in an unencumbered way. The only way I see for that to happen
is that the agents are trained on/around the unique core
The trivial senses of
semantics don't apply, and the deeper senses are so vague that they
are almost synonymous with grounding.
Completely wrong. Grounding is a fairly shallow concept that falls apart
as an
explanation of meaning under fairly moderate scrutiny. Semantics is, by
On 10/11/07, Mike Tintner [EMAIL PROTECTED] wrote:
Edward,
Thanks for interesting info - but if I may press you once more. You talk
of different systems, but you don't give one specific example of the kind of
useful ( significant for AGI) inferences any of them can produce -as I do
with my
Sounds like a good analogy. If it can play fetch, it can play
hide-and-seek. [And exactly the sort of thing that a true AGI must do -
absolute heart of AGI].
The question, which I wouldn't think that complex to answer, is: how did it
connect the action/activity of fetch to the activity of
On 10/12/07, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
No. Everything is grounded. This is a huge subject. Perhaps you should
read:
Where Mathematics Comes From, written by George Lakoff and Rafael Nunez,
You really do need to know about Lakoff/Fauconnier/Mark Johnson/Mark
Turner.
Nor, BTW, am I arguing at all against symbols (you might care to look
at the Picture Tree thread I started a few months ago to better understand
my thinking here) - the brain (and any true AGI, I believe) uses symbols,
outline graphics [or Johnson's image schemata] and images in parallel,
On 10/12/07, a [EMAIL PROTECTED] wrote:
Mathematician-level mathematics must be visually grounded. Without
groundedness, simplified and expanded forms of expressions are the same,
so there is no motive to simplify. If it is not visually grounded, then
it will only reach the level of the top
P.S. You will find, I suggest, - and this is of extreme importance here -
that all of people's difficulties in understanding - their confusion about -
maths lie in their brain's failure to ground/make sense of the various
forms. Their difficulties with fractions, calculus, integration,
Well, it's hard to put into words what I do in my head when I do
mathematics... it probably does use visual cortex in some way, but it's
not visually manipulating mathematical expressions nor using visual
metaphors...
I can completely describe. I completely do mathematics by visually
On 10/12/07, a [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
So then you're reduced to arguing that mathematicians who don't feel
like they're visualizing when they prove things, are somehow
unconsciously doing so.
I meant visually manipulating mathematical expressions.
Well
Hi all,
Novamente LLC is looking to fill an open AI Software Engineer position.
Our job ad is attached (it will be placed on the website soon).
Qualified and interested applicants should send a resume and cover
letter to me at [EMAIL PROTECTED] However, please read the ad
carefully first to be
Within Novamente LLC, we currently have 11 people working on
projects directly related to AGI. But about half of those are working on
infrastructural stuff, the other half more directly on AI coding. And our
team's dedication to AGI-ish versus commercial work has fluctuated over time
narrow AI and AGI is going to be a fuzzy
thing...
ben
On 8/2/07, Nathan Cook [EMAIL PROTECTED] wrote:
On 02/08/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
(as the Novamente team is very aware these days, as we're involved
with hooking a very limited subset of the Novamente Cognition
Mike,
The value of this sort of learning is one of the reasons why I'm so excited
about
rolling out AI systems as virtually embodied agents in virtual worlds...
So, I agree that traditionally AI systems have not utilized this kind of
active learning,
but I think that they should, and that there
I believe I mentioned before, on this list, that the proceedings of the 2006
AGI workshop
have been published, by IOS Press
http://www.amazon.com/Advances-Artificial-General-Intelligence-Architectures/dp/1586037587
The new piece of info in this email is that, as of now, all the contents
are
This looks interesting...
http://www.nvidia.com/page/home.html
Anyone know what are the weaknesses of these GPU's as opposed to
ordinary processors?
They are good at linear algebra and number crunching, obviously.
Is there some reason they would be bad at, say, MOSES learning?
(Having 128
Hi YKY,
The problem is that right now I'm not joining Novamente because I have some
different AGI ideas that you may not be willing to accept. And I don't
blame you for that. If I were to join NM, I'd like to make significant
modifications to it, or at least branch out from yours and to
If you're not proposing a better scheme for collaboration, and you
criticize my scheme in a non-constructive way, then effectively you're just
saying that you're not interested in collaborating at all. And that's
kind of sad, given that we're still so far from AGI.
YKY, I think there are two
To organize average people to work together you have to give rewards.
Finally, I really, *really* don't believe this either (unless you want
to insist that the satisfaction of a challenge met or a job well done -- or
the warm fuzzy that you get when you help someone -- are rewards). You
don't
I believe:
The practice of representing knowledge using high-dimensional
numerical vectors ;-)
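"Vectorianism" as glossed above - knowledge as high-dimensional numerical vectors - is easy to sketch. A minimal illustration (the concept names and toy 4-dimensional vectors are invented; real systems use hundreds of dimensions):

```python
import math

# Toy "knowledge" vectors: each dimension is some learned feature.
concepts = {
    "dog":  [0.9, 0.8, 0.1, 0.0],
    "wolf": [0.8, 0.9, 0.2, 0.0],
    "car":  [0.0, 0.1, 0.9, 0.8],
}

def cosine(u, v):
    """Similarity of two concepts = cosine of the angle between their vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# dog comes out far more similar to wolf than to car:
assert cosine(concepts["dog"], concepts["wolf"]) > cosine(concepts["dog"], concepts["car"])
```

Similarity, analogy, and retrieval all reduce to geometry in this representation, which is both its appeal and (critics would say) its limitation.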
On 6/6/07, Bob Mottram [EMAIL PROTECTED] wrote:
What is vectorianism ?
On 06/06/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Hello Josh,
Do you find a2i2 convincing? Their vectorianism seems to
I think Novamente's current mode of operation is pretty much semi-open
already (since so many have worked there) and is just a small step from
using my consortium idea -- if Ben is willing to give up complete control of
his AGI design
YKY
YKY --
As you are writing about my own
Mike, putting together a demo of a machine learning system recognizing
objects from simple line drawings would take me less than one month, using
textbook technologies. Not worthwhile. Putting together a simple
reinforcement learning system doing the same stuff as NM does in that fetch
video
Sure, but the nature of AGI is that wizzy demos are likely to come fairly
late in the development process. All of us actively working in the field
understand this
-- Ben G
On 6/6/07, Mike Tintner [EMAIL PROTECTED] wrote:
Ben,
I'd be looking for a totally different proof-of-concept for
I think he's just saying to
-- make a pool of N shares allocated to technical founders. Call this the
Technical Founders Pool
-- allocate M options on these shares to each technical founder, but with a
vesting condition that includes the condition that only N of the options
will ever be vested
PSS Ben I loved reading your blog. Pls keep it up. If you ever have time,
let us know why, of the 3 different AGI approaches you entertained, you went
with Novamente instead of the Hebbian neural net (and the theorem proving
one)... us scruffies would like to know... is it just your
4. *Accept members as broadly as possible*. A typical AGI company
usually interviews potential candidates, signs NDAs, and then sees if their
skills align with the company's project. After such a screening
many candidates with good ideas may not be hired. The consortium is to
remedy this by
scenarios
or evaluate test runs. Those who are good coders but without much AI
knowhow could be put to work developing simulation environments, or
just generally improving the quality of animations or other stuff
which will add to the presentation.
On 03/06/07, Benjamin Goertzel [EMAIL PROTECTED
On 6/3/07, John G. Rose [EMAIL PROTECTED] wrote:
It needs a Visual Studio 2005 Solution file in the source distro. Just
having that would offer much encouragement to would-be developers…
Well, it's an open-source project, so feel free to create such a file ;-)
[As I use OSX and Ubuntu, it
YKY and Mark Waser ...
About innovative organizational structures for AGI projects, let me
suggest the following
Perhaps you could
A)
make the AGI codebase itself open-source, but using a license other than
GPL, which
-- makes the source open
-- makes the source free for noncommercial use
The most important thing by far is having an AGI design that seems
feasible.
Only after that (very difficult) requirement is met, do any of the others
matter.
-- Ben G
On 6/3/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Can people rate the following things?
1. quick $$, ie salary
2.
For me, wanting to make a thinking machine is a far stronger motivator
than wanting to get rich.
Of course, I'd like to get rich, but getting rich is quite ordinary and
boring
compared to launching a positive Singularity ;-p
Being rich for the last N years before Singularity is better than not
Because, unless they take a majority share, they want to know who it is
they're dealing with... i.e. who is controlling the company
One of the most important things an investor looks at is THE PEOPLE who are
controlling the company, and in your scheme, it is not clear who that is...
Yes, you
pretty well how VCs think/operate and the biggest
drawback is going to be that, in order to protect the AGI, we're not going
to be willing to give up a majority share.
- Original Message -
*From:* Benjamin Goertzel [EMAIL PROTECTED]
*To:* agi@v2.listbox.com
*Sent:* Sunday, June 03, 2007 9:08
I recently visited Google Research and gave a talk there --
and tried my best to estimate the odds that Google has
a secret AGI project going on ;-)
My best current guess is that they do not ... and my reasons
why and some associated thoughts may be found at
I hope to create a project where members feel *happy*, instead of
feeling like they're in a torture chamber.
Please note, successful commercial companies and open-source projects do
seem to feature happy participants ...
I am in favor of innovative project structures, but so far as I can tell,
the
I'll keep thinking... Basically what we need is a simple mechanism for
people to share their secret ideas and increase collaboration, yet not
lose credit for their contributions.
YKY
--
It's a hard problem. Even within Novamente, which is a small group
Hi,
As it happens that [safe software infrastructure]
would not be anywhere near my top priority
in staffing the Research Program.
My top priorities are
-- AGISim
-- Research Area 8: Cognitive Technologies
If $$ can be raised to significantly fund these aspects, that will be a
start.
This
On 6/2/07, Mark Waser [EMAIL PROTECTED] wrote:
Yes, I believe there're people capable of producing income-generating
stuff in the interim. I can't predict how the project would evolve, but am
optimistic.
Ask Ben about how much that affects a project . . . .
The need to create commercial
, say, figure
out a new sex move that is only effective in really humid
climates ;-)
In short, search is a narrow domain, so that efficiency in search
is not really human-generality AGI .. nor anywhere near...
-- Ben G
On 6/2/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Benjamin Goertzel [EMAIL
Matt Mahoney wrote:
So a progression of useful responses to funny videos might be:
1. Retrieving videos that other people have rated as funny (not AI).
2. Looking at videos and deciding which ones are funny (a hard AI problem).
3. Creating new, funny video (a harder AI problem).
Google is
But what if I like political humor?
And recognizing video of a dog falling in a toilet doesn't seem like an
easy
problem to me.
But let's simplify the problem. Is it possible to write a text-only joke
detector? Exactly what makes a joke funny?
Suppose we take thousands of jokes with various
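Mechanically, the "thousands of jokes" idea is supervised text classification. A minimal bag-of-words version might look like the sketch below (the tiny corpus is invented; a real attempt would need thousands of examples, and would still say nothing about *why* a joke is funny, which is rather the point of the question):

```python
import math
from collections import Counter

# Tiny labeled corpus, invented purely for illustration.
jokes     = ["why did the chicken cross the road", "knock knock who is there"]
non_jokes = ["the chicken crossed the road at noon", "please knock before entering"]

def counts(texts):
    return Counter(w for t in texts for w in t.split())

joke_w, plain_w = counts(jokes), counts(non_jokes)

def joke_score(text):
    """Naive-Bayes-style log-likelihood ratio with add-one smoothing.
    Positive leans 'joke', negative leans 'not a joke'."""
    return sum(math.log((joke_w[w] + 1) / (plain_w[w] + 1)) for w in text.split())
```

Such a detector keys on surface vocabulary ("why did the..."), illustrating how far word statistics are from a model of humor.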
I agree that the crux of getting AGI to work is system-theoretic, i.e.
figuring out how to put all the pieces together at a high level. However,
in figuring this out for Novamente, we also discovered we had to invent some
new algorithmic pieces that were pretty sophisticated, such as
Designing a useful new algorithm may take six months of research and
development, but an implementation of that algorithm will take
something on the order of a week of effort. There is nothing hard
about implementation; a monkey could do it given adequate
instruction. There is no shortcut to
Actually, it is quite possible to patent something purely protectively --
i.e. get the patent but then give everyone in the world the right to freely
use the idea ;-) ... the point being to stop anyone else from fallaciously
patenting it...
ben
On 6/1/07, Russell Wallace [EMAIL PROTECTED]
Patents these days can be done competently for as little as $7K ...
On 6/1/07, J. Andrew Rogers [EMAIL PROTECTED] wrote:
On Jun 1, 2007, at 1:45 PM, Benjamin Goertzel wrote:
Actually, it is quite possible to patent something purely
protectively -- i.e. get the patent but then give everyone
Hmmm...
Proprietary works.
Open source works.
Each has its flaws, but both basically do work for generating software via
collective human effort...
What you are suggesting, sounds like a mess that would not work...
One problem with your suggestion is that the assignment of credit problem
Hi all,
I emailed a little while ago about a symposium on Complexity in Cognition
that
I am organizing for the ICCS conference.
The deadline was previously May 31, but I just got notice that it has been
extended to June 30.
So, there is plenty of time to submit an abstract now ;-)
As I noted
Hi all,
I'm organizing a Symposium session at ICCS
necsi.org/events/iccs7/
this year, on the theme of Complexity in Cognition, and thought
some of you on this list might like to participate.
I'm hoping to get an interdisciplinary group of
contributors including folks from AI,
of the short notice, but figured
I'd give
it a shot anyway...
-- Ben
On 5/28/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Wow, 3 days notice? How long has this been in the works?
Benjamin Goertzel wrote:
Hi all,
I'm organizing a Symposium session at ICCS
necsi.org/events/iccs7
It's in Boston, sometime in the interval Oct 28-Nov 2
http://necsi.org/events/iccs7/
On 5/28/07, Michael Lamport Commons [EMAIL PROTECTED] wrote:
Where is it? I would like to do it if it is not too far.
Michael Lamport Commons
-Original Message-
From: Benjamin Goertzel [EMAIL
Handling syntax separately from semantics and pragmatics is hacky
and non-AGI-ish ... but, makes it easier to get NLP systems working at a
primitive level in a non-embodied context
Operator grammar mixes syntax and semantics which is philosophically
correct, but makes things harder
Link grammar
Hi all,
Someone emailed me recently about Searle's Chinese Room argument,
http://en.wikipedia.org/wiki/Chinese_room
a topic that normally bores me to tears, but it occurred to me that part of
my reply might be of interest to some
on this list, because it pertains to the more general issue of
-of-consciousness. But I suggest
that this pathology is due to the unrealistically large amount of computing
resources that the rulebook requires.
Not by my definition of intelligence (which requires learning/adaptation).
- Original Message -
*From:* Benjamin Goertzel [EMAIL PROTECTED