Gents,
I was prompted to write up the following by a discussion (argument?) I'm
having with Marvin Minsky:
Enjoy,
Josh
-
Programming at the Edge of Cybernetics
The two major approaches to engineering the mind, cybernetics and AI,
differ sharply. Cybernetics was based on
One would hope that a good lossy compression would
(a) regularize writing style
(b) correct misspellings
(c) correct contradictions
and possibly have other beneficial effects, as well as omitting trivia.
I'll bet that a multilevel HMM could do a fairly decent job of (a) and (b), and
maybe a little bit of
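For concreteness, here is about the smallest model in that family that does
anything at all for point (b): a plain character-bigram Markov chain (an HMM
whose states are visible), scoring candidate spellings in Python. The training
corpus and the candidate words are invented; a real system would train on far
more text:

    from collections import Counter

    def train_bigrams(text):
        # count character-bigram frequencies in clean training text
        counts = Counter(zip(text, text[1:]))
        total = sum(counts.values())
        return {bg: n / total for bg, n in counts.items()}

    def score(word, model):
        # probability of a word under the bigram model;
        # unseen bigrams get a small floor instead of zero
        p = 1.0
        for bg in zip(word, word[1:]):
            p *= model.get(bg, 1e-6)
        return p

    model = train_bigrams("the quick brown fox jumps over the lazy dog " * 100)
    print(score("the", model) > score("teh", model))  # True: prefers the real spelling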
On Tuesday 15 August 2006 09:03, Ben Goertzel wrote:
Yes, but the compression software could have learned stuff before
trying the Hutter Challenge, via compressing a bunch of other files
... and storing the knowledge it learned via this experience in its
long-term memory...
This could have a
On Monday 25 September 2006 16:48, Ben Goertzel wrote:
My own view is that symbol grounding is not a waste of time ... but,
*exclusive reliance* on symbol grounding is a waste of time.
It's certainly not a waste of time in the general sense, especially if you're
going to be building a robot!
On Friday 24 November 2006 06:03, YKY (Yan King Yin) wrote:
You talked mainly about how sentences require vast amounts of external
knowledge to interpret, but it does not imply that those sentences cannot
be represented in (predicate) logical form.
Substitute bit string for predicate logic
On Saturday 25 November 2006 12:42, Ben Goertzel wrote:
I'm afraid the analogies between vector space operations and cognitive
operations don't really take you very far.
For instance, you map conceptual blending into quantitative
interpolation -- but as you surely know, it's not just *any*
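For what the mapping is worth, here is a minimal Python sketch of
blending-as-interpolation, with invented features and numbers. Note that the
naive convex combination averages every coordinate, which is exactly the
objection: a blend that half-erases the antlers is not the blend you wanted:

    import numpy as np

    # invented features: [size, antlers, softness, alive]
    moose = np.array([0.9, 1.0, 0.0, 1.0])
    teddy_bear = np.array([0.2, 0.0, 1.0, 0.0])

    def blend(a, b, alpha=0.5):
        # naive conceptual blend: convex combination of the vectors
        return alpha * a + (1 - alpha) * b

    print(blend(moose, teddy_bear))  # [0.55 0.5 0.5 0.5] -- antlers averaged away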
On Saturday 25 November 2006 13:52, Ben Goertzel wrote:
About Teddy Meese: a well-designed Teddy Moose is almost surely going
to have the big antlers characterizing a male moose, rather than the
head-profile of a female moose; and it would be disappointing if a
Teddy Moose had the head and
My best ideas at the moment don't have one big space where everything sits,
but something more like a Society of Mind where each agent has its own space.
New agents are being tried all the time by some heuristic search process, and
will come with new dimensions if that does them any good.
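A toy rendering of that architecture, in Python, with every detail invented
for illustration: each agent embeds inputs into its own private space, a crude
heuristic search keeps proposing new agents, and a candidate survives only if
it beats the weakest incumbent on a stand-in task:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 8))                 # stand-in sensory data
    y = (X[:, 0] + X[:, 3] > 0).astype(int)      # hidden regularity to capture

    def make_agent(k=2):
        return rng.normal(size=(k, 8))           # an agent's private k-D space

    def score(agent):
        # leave-one-out 1-nearest-neighbor accuracy in the agent's space
        Z = X @ agent.T
        hits = 0
        for i in range(len(Z)):
            d = np.linalg.norm(Z - Z[i], axis=1)
            d[i] = np.inf
            hits += int(y[d.argmin()] == y[i])
        return hits / len(Z)

    society = [make_agent() for _ in range(5)]
    for _ in range(50):                          # heuristic search for better agents
        cand = make_agent()
        worst = min(range(5), key=lambda i: score(society[i]))
        if score(cand) > score(society[worst]):
            society[worst] = cand
    print("best agent's accuracy:", max(score(a) for a in society))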
On Sunday 26 November 2006 14:14, Pei Wang wrote:
In this design, the tough job is to make the agents work together
to cover all kinds of tasks, and for that part, I'm afraid that the
multi-dimensional space representation won't help much. Also, we
haven't seen much work on high-level
On Sunday 26 November 2006 18:02, Mike Dougherty wrote:
I was thinking about the N-space representation of an idea... Then I
thought about the tilting table analogy Richard posted elsewhere (sorry,
I'm terrible at citing sources). Then I started wondering what would
happen if the N-space
On Monday 27 November 2006 11:49, YKY (Yan King Yin) wrote:
To illustrate it with an example, let's say the AGI can recognize apples,
bananas, tables, chairs, the face of Einstein, etc, in the n-dimensional
feature space. So, Einstein's face is defined by a hypersurface where each
point is
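A sketch of that picture in Python, under the invented (and much too clean)
assumption that a concept's region is simply a ball around a centroid; real
regions would be far messier:

    import numpy as np

    class Region:
        # a concept as a region of feature space: centroid plus radius
        def __init__(self, centroid, radius):
            self.centroid = np.asarray(centroid, dtype=float)
            self.radius = radius
        def contains(self, point):
            return np.linalg.norm(np.asarray(point) - self.centroid) <= self.radius

    einstein = Region([0.7, 0.1, 0.9], 0.2)              # invented coordinates
    print(einstein.contains([0.68, 0.12, 0.88]))         # True: recognized
    print(einstein.contains([0.0, 0.9, 0.1]))            # False: someone else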
On Monday 27 November 2006 10:35, Ben Goertzel wrote:
Amusingly, one of my projects at the moment is to show that
Novamente's economic attention allocation module can display
Hopfield net type content-addressable-memory behavior on simple
examples. As a preliminary step to integrating it with
On Monday 27 November 2006 10:35, Ben Goertzel wrote:
...
An issue with Hopfield content-addressable memories is that their
memory capability gets worse and worse as the networks get sparser and
sparser. I did some experiments on this in 1997, though I never
bothered to publish the results
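A minimal reconstruction of that kind of experiment in Python -- the sizes,
sparsity levels, and corruption rate are all guesses for illustration: store a
few patterns Hebbian-style, knock out a growing fraction of the weights, and
watch recall of a corrupted cue degrade:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 5
    patterns = rng.choice([-1, 1], size=(P, N))

    W = sum(np.outer(p, p) for p in patterns) / N        # Hebbian storage
    np.fill_diagonal(W, 0)

    def recall(W, cue, steps=20):
        s = cue.copy()
        for _ in range(steps):
            s = np.sign(W @ s)
            s[s == 0] = 1
        return s

    for sparsity in (0.0, 0.5, 0.9):
        mask = rng.random(W.shape) > sparsity
        mask = np.triu(mask, 1); mask = mask | mask.T    # keep weights symmetric
        cue = patterns[0].copy()
        cue[:10] *= -1                                   # corrupt 10% of the bits
        overlap = np.mean(recall(W * mask, cue) == patterns[0])
        print(f"sparsity {sparsity:.1f}: recalled fraction {overlap:.2f}")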
On Tuesday 28 November 2006 14:47, Philip Goetz wrote:
The use of predicates for representation, and the use of logic for
reasoning, are separate issues. I think it's pretty clear that
English sentences translate neatly into predicate logic statements,
and that such a transformation is likely
On Tuesday 28 November 2006 17:50, Philip Goetz wrote:
I see that a raster is a vector. I see that you can have rasters at
different resolutions. I don't see what you mean by map the regions
that represent the same face between higher and lower-dimensional
spaces, or what you are taking the
On Wednesday 29 November 2006 13:56, Matt Mahoney wrote:
How is a raster scan (16K vector) of an image useful? The difference
between two images of faces is the RMS of the differences of the images
obtained by subtracting pixels. Given an image of Tom, how do you compute
the set of all
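Matt's distance, written out in Python; random arrays stand in for real face
rasters here:

    import numpy as np

    def rms_difference(a, b):
        # root-mean-square of the pixelwise difference of two images
        d = a.astype(float) - b.astype(float)
        return np.sqrt(np.mean(d ** 2))

    rng = np.random.default_rng(1)
    tom = rng.integers(0, 256, size=(128, 128))
    tom_again = np.clip(tom + rng.normal(0, 10, tom.shape), 0, 255)  # same face, new shot
    stranger = rng.integers(0, 256, size=(128, 128))
    print(rms_difference(tom, tom_again) < rms_difference(tom, stranger))  # True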
On Wednesday 29 November 2006 16:04, Philip Goetz wrote:
On 11/29/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
There will be many occurrences of the smaller subregions, corresponding to
all different sizes and positions of Tom's face in the raster. In other
words, the Tom's face region
On Friday 01 December 2006 20:06, Philip Goetz wrote:
Thus, I don't think my ability to follow rules written on paper to
implement a Turing machine proves that the operations powering my
consciousness are Turing-complete.
Actually, I think it does prove it, since your simulation of a Turing
Yes, indeed, the facility with which we can learn languages expressed by hand
motions (and the fact that control of language and fine motor control for the
hands is intimately bound up in the brain) is one of the reasons that I think
that language and imitating manual skills are strongly
-
From: J. Storrs Hall, PhD. [EMAIL PROTECTED]
Subject: Re: [agi] RSI - What is it and how fast?
I've just finished a book on this subject (coming out in May from
Prometheus). ...
Thanks!
The book, under the title Beyond AI: Creating the Conscience of the Machine,
is an outgrowth
down the middle of our
sciences of the mind.
A lot of my dynamical systems-based theories of representation are an attempt
to bridge the gap.
--Josh
On Saturday 02 December 2006 15:57, Pei Wang wrote:
On 12/2/06, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
One of the big puzzles in AI
Nope. I think, for example, that the process of evolution is universal -- it
shows the key feature of exponential learning growth, but with a very slow
clock. So there're other models besides a mammalian brain.
My mental model is to ask of a given person, suppose you had a community of
10,000
Couldn't point you at anything systematic, but one good place to find biases
is looking at superstition and magic, where you find things like the illusion
of control, various tendencies to overanthropomorphize, causality attributed
to similarity and contagion, and so forth.
J
On Tuesday 13
One reason for picking a language more powerful than the run-of-the-mill
imperative ones (of which virtually all the ones mentioned so far are just
different flavors) is that they can give you access to different paradigms
that will enhance your view of how an AGI should work internally.
A
On Sunday 18 February 2007 19:22, Ricardo Barreira wrote:
You can spend all the time you want sharpening your axes; it'll do you
no good if you don't know what you'll use them for...
True enough. However, as I've also mentioned in this venue before, I want to
be able to do general associative
On Monday 19 February 2007 16:08, Ben Goertzel wrote:
It's pretty clear that humans don't
run FOPC as a native code, but that we can learn it as a trick.
I disagree. I think that Hebbian learning between cortical columns is
essentially equivalent to basic probabilistic
term logic.
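The claimed equivalence, in miniature: Hebbian-style counting of
co-activations between two columns A and B estimates the conditional frequency
P(B|A), which is just the truth value a basic probabilistic term logic
attaches to the inheritance from A to B. The event stream below is invented:

    import random

    random.seed(2)
    n_A = n_AB = 0
    for _ in range(10_000):
        a = random.random() < 0.3                   # column A fires
        b = random.random() < (0.8 if a else 0.1)   # B fires more often with A
        if a:
            n_A += 1
            n_AB += b                               # Hebbian co-activation count
    print(f"Hebbian weight = P(B|A) estimate: {n_AB / n_A:.2f}")  # about 0.80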
On Wednesday 21 February 2007 11:52, Aki Iskandar wrote:
I'd be interested in getting some feedback on the book On
Intelligence (author: Jeff Hawkins).
...
The basic premise of the book, if I can even attempt to summarize it
in two statements (I wouldn't be doing it justice though) is:
1 -
I recently ran across this paper by Sussman on robust software systems:
http://swiss.csail.mit.edu/classes/symbolic/spring07/readings/robust-systems.pdf
And I was flabbergasted to find that there was about a 50% overlap with the
ideas behind the system I'm working on. (It's also interesting to
On Tuesday 27 February 2007 10:23, Richard Loosemore wrote:
Yup. As far as I can tell Sussman is coming a little late to the party.
There is an urban legend of AI that ca. 1970, Marvin Minsky thought so little
of the vision problem that he assigned an undergraduate to do it as a
summer
Just noticed this on Slashdot.
Open source but not free software, for those of you for whom this makes a
difference.
http://www.numenta.com/for-developers/software.php
Josh
-
On Wednesday 07 March 2007 10:34, YKY (Yan King Yin) wrote:
I discovered something cool: computational pragmatics. You may take a
look at Jerry R Hobbs' paper: Interpretation as Abduction, ...
Nice. Note that one of the reasons that I'm going the numerical route is that
some powerful methods
On Thursday 08 March 2007 17:42, Mike Dougherty wrote:
Yeah, if I leave a workbench worth of carpentry tools on a pile of
lumber, I don't expect to have an emergent deck arise...
If I understand Minsky's Society of Mind, the basic idea is to have the tools
be such that you can build your deck
On Wednesday 07 March 2007 17:58, Ben Goertzel wrote:
A more interesting question to think about, rather than how to represent
a story in a formal language, is: How would you convince yourself that
your AGI actually understood a story? What kind of question-answers or
behaviors would
of the whole business, especially
for the duration of a specific task, as long as it isn't supposed to have any
more capabilities per se than any other agent.
Josh
On Friday 09 March 2007 07:36, Pei Wang wrote:
On 3/9/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
If I understand Minsky's Society
On Saturday 10 March 2007 14:36, Andrew Babian wrote:
I can't speak for Minsky, but I would wonder what advantage would there be
for having only one agent?
An arbitrator. You have only one body, and it would be counterproductive for
it to try to do different things at the same time. (It's
On Sunday 11 March 2007 06:58, YKY (Yan King Yin) wrote:
1. Fix a knowledge representation scheme (eg CycL, or Novamentese? etc)
The main problem with this is that it seems to assume that there is One True
Knowledge Representation in the system. In the automatic microprocessor
design stuff I
representations of
the pipeline behavior as well. Luckily for my sanity, we never got into
superscalar, out-of-order, or any of that stuff that's de rigueur in modern
processors!
Josh
On Sunday 11 March 2007 20:07, Russell Wallace wrote:
On 3/11/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
The main
On Sunday 11 March 2007 15:07, YKY (Yan King Yin) wrote:
My main point is: a unified KR allows people to *work together*.
That would certainly be nice, but I have yet to be convinced that it's
possible :-)
Let's look at the alternative, which is even more dismal: you have many
On Monday 12 March 2007 09:01, Richard Loosemore wrote:
The word "module" has implications, some of which I don't think you
really want to buy. If the helvetica-reading module is completely
different from the roman-reading module, why do I find it so easy to
accommodate to a new typeface ...
On Monday 12 March 2007 10:42, Richard Loosemore wrote:
... Overlooking the practical deficiencies of actual Lego as
a material for dealing with food, one could imagine a kind of neoLego
that really was adequate for making all the tools in my kitchen. Grant
me that as a presupposition.
On Tuesday 13 March 2007 01:41, YKY (Yan King Yin) wrote:
In my approach, "John loves Mary" can be represented as P(loves,john,mary),
where P is some generic predicate. Each of those terms (loves, john
and mary) is defined by lower-level terms. I don't see any problem with
this approach.
How
On Tuesday 13 March 2007 07:26, Eric Baum wrote:
Is there some reason why it is not the most natural thing
to look at the Helvetica Reader (as with pretty much any proper
noun) as an instance in the
class of font readers? It inherits pretty much everything from
existing font readers, except
On Tuesday 13 March 2007 09:41, Ben Goertzel wrote:
One of my questions regarding your approach is why you think that
similar numeric representations are also natural
and efficient for more abstract sorts of data processing.
I think that once you have symbols, there are lots of cool things
On Tuesday 13 March 2007 10:50, Russell Wallace wrote:
On 3/13/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
A real working logic-based system that did what
it needed to would consist mostly of predicates like
fmult(num(characteristic(Sign1,Bit11,Bit12,...),mantissa(Bitm11,Bitm12
Aha... we are now getting down to some brass tacks :^)
On Tuesday 13 March 2007 12:20, Ben Goertzel wrote:
Numeric vectors are strictly more powerful as a representation than
predicates.
This is not really true...
Touché. My fmult predicate of a few msgs ago is of course a predicate that
On Tuesday 13 March 2007 15:56, Ben Goertzel wrote:
Without taking a lot of time (maybe I'll elaborate more later), the
point is that humans solve analogy problems not (usually) by finding
specific strong analogies, but by finding a huge number
of weak analogies and statistically polling the
On Tuesday 13 March 2007 12:53, Eric Baum wrote:
Are you suggesting that there is in no sense a decision made that
there is a new font to be learned (and possibly reserving physical space).
Definitely not reserving space. I'm not even sure that the new capability
would be in a physically
On Tuesday 13 March 2007 20:33, Ben Goertzel wrote:
I am confused about whether you are proposing a brain model or an AGI
design.
I'm working with a brain model for inspiration, but I speculate that once we
understand what it's doing we can squeeze a few orders of magnitude
optimization out
On Tuesday 13 March 2007 22:34, Ben Goertzel wrote:
J. Storrs Hall, PhD. wrote:
On Tuesday 13 March 2007 20:33, Ben Goertzel wrote:
I am confused about whether you are proposing a brain model or an AGI
design.
I'm working with a brain model for inspiration, but I speculate that once
On Wednesday 14 March 2007 03:30, Eugen Leitl wrote:
We don't. Intelligence looks like 10^23 ops/s on 10^17 sites.
Pulling numbers out of /dev/ass is easy; anyone can do it.
In my previous msg that one referred to, I quoted my figures as being from
Kurzweil and Moravec respectively. After
On Wednesday 14 March 2007 08:05, Eugen Leitl wrote:
You might find the authors have a bit more credibility than
Moravec, and especially such a notorious luminary like Kurzweil
http://www.kurzweiltech.com/aboutray.html
Besides writing books, Kurzweil builds systems that work.
I'm not
On Wednesday 14 March 2007 06:44, Ben Goertzel wrote:
Here is a brain question though: In your approach, the recursive
build-up of patterns-among-patterns-...-among-patterns seems to rely on the
ability to treat transformations
(e.g. matrices, or perhaps
nonlinear transformations
On Wednesday 14 March 2007 15:30, Eugen Leitl wrote:
The reason Drexler proposed scaling down the Difference Engine is not
because he considered them practical, but because they're easy to analyze.
But more to the point, to put a LOWER bound on the computational capacity of
nanosystems.
I'm not
In Prologesque syntax, let points be defined as facts p(X, Y, Z, ...) for all
points (X, Y, Z, ...) in that space. For concreteness, let's use
p(X,Y,Z).
Now in Prolog, we can use p as a function of X and Y by calling, e.g.
p(17, 54, Z) which will return with Z bound to the result. Thinking
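The same idiom transcribed into Python for readers who don't speak Prolog: the
relation is a set of tuples, and "calling" it is a query with an unbound slot.
The sample points are invented:

    VAR = object()   # stands in for an unbound Prolog variable

    points = {(17, 54, 3), (17, 55, 4), (20, 20, 9)}   # the relation p/3

    def query(relation, pattern):
        # yield every tuple matching the pattern; VAR matches anything
        for tup in relation:
            if all(p is VAR or p == v for p, v in zip(pattern, tup)):
                yield tup

    print(list(query(points, (17, 54, VAR))))   # p(17, 54, Z) -> [(17, 54, 3)]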
On Wednesday 14 March 2007 20:00, Ben Goertzel wrote:
...
Then, we can submit a query of the form
m(specific state, specific input, variable output, variable next state)
= m(S,I,$O,$NS)
using $ to precede variables
So far, so good. The relation is simply being used to store the table
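And "storing the table" can be exactly that literal. A sketch in Python, with
an invented turnstile machine standing in for the relation m:

    m = {   # (state, input) -> (output, next_state)
        ("locked", "coin"): ("unlock", "unlocked"),
        ("unlocked", "push"): ("lock", "locked"),
    }

    state = "locked"
    for symbol in ["coin", "push", "coin"]:      # m(S, I, $O, $NS), repeatedly
        output, state = m[(state, symbol)]
        print(symbol, "->", output, "| now", state)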
On Thursday 15 March 2007 02:16, Kevin Peterson wrote:
Hmm...was the 1MB just a blue sky guess, or did you follow a similar
chain of reasoning?
Vaguely similar, but including some intuitions about software as well.
Josh
-
What's the size of the space NM is searching for this plan?
If you rewarded it for, say, regularities in arithmetic, starting with set
theory, how long would it take it to come up with, say, Goldbach's
conjecture?
Josh
On Saturday 17 March 2007 16:05, Ben Goertzel wrote:
Hi all,
This
On Monday 19 March 2007 17:30, Ben Goertzel wrote:
...
My own view these days is that a wild combination of agents is
probably not the right approach, in terms of building AGI.
Novamente consists of a set of agents that have been very carefully
sculpted to work together in such a way as to
Look also at Ontic:
http://lambda-the-ultimate.org/classic/message6641.html
http://ttic.uchicago.edu/%7Edmcallester/ontic-spec.ps
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/kr/systems/ontic/0.html
http://citeseer.ist.psu.edu/witty95ontic.html
Josh
On Saturday 21 April 2007
On Monday 23 April 2007 10:03, Matt Mahoney wrote:
... The brain is a billion times slower per step, has only about 7
words of short term memory, ...
For some appropriate meaning of "word" -- I'd suggest that "frame" might be
more useful in thinking about what's going on. One of Miller's magical
On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:
... An AGI working with bigger numbers had better discover binary
numbers. Could an AGI do it? Could it discover rational numbers? (It
would initially believe that irrational numbers do not exist, as the
early Pythagoreans did.)
He who refuses to do arithmetic is doomed to talk nonsense.
- John McCarthy
We're talking about relative numbers here. Suppose you had an AI algorithm
that was exactly as good as the one the human brain uses. In fact, let's
suppose you had one that was two orders of magnitude better,
:43, Samantha Atkins wrote:
On Apr 23, 2007, at 2:05 PM, J. Storrs Hall, PhD. wrote:
On Monday 23 April 2007 15:40, Lukasz Stafiniak wrote:
... An AGI working with bigger numbers had better discover binary
numbers. Could an AGI do it? Could it discover rational numbers? (It
would
On Monday 23 April 2007 19:45, Matt Mahoney wrote:
... How do you distinguish between consciousness (sense of self) and the
programmed belief in consciousness, free will, and fear of death that all
animals possess because it confers a survival advantage?
A distinction without a difference, I
On Tuesday 24 April 2007 07:42, Bob Mottram wrote:
Incidentally, once a significant amount of data is recorded from human use
of a telerobot getting the robot to do some things autonomously becomes a
data mining exercise.
Mining plus matching, analogy, and interpolation/extrapolation. The key
On Tuesday 24 April 2007 18:06, Eliezer S. Yudkowsky wrote:
Aside from that, it [the U. of Phoenix test] sounds fair enough to me,
and unlike the Turing Test it might not require strongly superhuman
intelligence.
The Turing Test doesn't require superhuman intelligence, strong or otherwise.
(With a wink towards Ben)
Cheney goes into the Oval Office with the latest war report.
"Three Brazilian soldiers died in the latest bombing."
Bush gasps and turns ash-gray.
"Oh my God!" he says. His brow wrinkles in thought.
"How many is a brazillion?"
Perhaps a thermostat has military
On the face of it, this isn't anything more than Parry did. If you have a
substrate that can interpret actions and situations into emotional inputs ("I
just got insulted"), the overall emotional control mechanism can be modelled
by an embarrassingly simple system of differential equations.
Josh
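One embarrassingly simple instance of such a system, in Python, with invented
constants: a single arousal variable driven by appraised events and decaying
back to baseline:

    def step(arousal, stimulus, decay=0.1, gain=0.5, dt=1.0):
        # one Euler step of d(arousal)/dt = -decay*arousal + gain*stimulus
        return arousal + dt * (-decay * arousal + gain * stimulus)

    arousal = 0.0
    events = [0, 0, 1.0, 0, 0, 0, 0, 0]     # an insult arrives at t = 2
    for t, s in enumerate(events):
        arousal = step(arousal, s)
        print(f"t={t}: arousal={arousal:.2f}")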
On Friday 27 April 2007 17:44, John G. Rose wrote:
So then we decide OK let's try to measure the temperature of a black hole
so we jettison the X1117 into the black hole and right before it passes the
event horizon it converts itself into radially emitted neutrinos by
sacrificing itself and
I think YKY is right on this one. There was a Dave Barry column about going to
the movies with kids in which a 40-foot image of a handgun appears on the
screen, at which point every mother in the theater turns to her kid and says,
"Oh look, he's got a GUN!"
Communication in natural language is
I disagree with this in two ways. First, it's fairly well accepted among
mainstream AI researchers that full NL competence is AI-complete, i.e. that
human-level intelligence is a prerequisite for NL. Secondly, even the parsing
part of NLP is part of a more general recursive sequence
On Saturday 28 April 2007 09:02, Benjamin Goertzel wrote:
In other words: I became convinced that in the developmental approach, if
you want to take the human child language learning metaphor at all
seriously, you need to go beyond pure language learning and take an
experientially grounded
In case anyone is interested, some folks at IBM Almaden have run a
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (at
0.1 times real time):
http://news.bbc.co.uk/2/hi/technology/6600965.stm
http://ieet.org/index.php/IEET/more/cascio20070425/
On Tuesday 01 May 2007 14:06, Benjamin Goertzel wrote:
In particular, emotions seem necessary (in humans) to a) provide goals,
b) provide pre-programmed constraints (for when logical reasoning doesn't
have enough information), and c) enforce urgency.
...
So, IMO, it becomes a toss-up,
On Wednesday 02 May 2007 15:08, Charles D Hixson wrote:
Mark Waser wrote:
... Machines will know
the meaning of text (i.e. understand it) when they have a coherent
world model that they ground their usage of text in.
...
But note that in this case "world model" is not a model of the same
On Saturday 05 May 2007 23:29, Matt Mahoney wrote:
About programming languages. I do most of my programming in C++ with a
little bit of assembler. AGI needs some heavy duty number crunching. You
really need assembler to do most any kind of vector processing, especially
if you use a
Consider a ship. From one point of view, you could separate the people aboard
into two groups: the captain and the crew. But another just as reasonable
point of view is that the captain is just one member of the crew, albeit one
distinguished in some ways.
One could reasonably take the point of
On Sunday 06 May 2007 07:49, Benjamin Goertzel wrote:
As Nietzsche put it, from a functional point of view, consciousness is like
the general who, after the fact, takes responsibility for the largely
autonomous actions of his troops ;-)
That's actually pretty close to the way (I think) it
On Sunday 06 May 2007 10:18, Mike Tintner wrote:
Consider a ship. From one point of view, you could separate the people
aboard into two groups: the captain and the crew. But another just as
reasonable point of view is that the captain is just one member of the crew,
albeit one distinguished in
On Sunday 06 May 2007 09:47, Mike Tintner wrote:
And if you're a betting man, pay attention to Dennett. He wrote about
consciousness in the early '90s and, together with Crick, helped make it
scientifically respectable.
Actually, the serious study of consciousness was made respectable by Julian
On Sunday 06 May 2007 09:47, Mike Tintner wrote:
For example - and this is the real issue that concerns YOU and AGI - I just
introduced an entirely new dimension to the free will debate.
Everybody and his dog, especially the philosophers, thinks that they have some
special insight into free
On Sunday 06 May 2007 17:59, J. Andrew Rogers wrote:
On May 6, 2007, at 2:27 PM, J. Storrs Hall, PhD. wrote:
The only person, for my money, who has really seen through it is Drew
McDermott, Yale CS prof (former student of Minsky). ...
Eh? Unless McDermott first came up with that idea long
In Beyond AI I have a taxonomy (and Kurzweil picked that chapter, among
others, to post on his site). In brief:
Hypohuman AI -- below human ability and under human control
Diahuman AI -- somewhere in the human range (which is large!)
Epihuman AI -- smarter/more capable than human, but equivalent
impossible (unless someone can
pull out that magic definition of intelligence). It also seems *very*
human-centric to compare everything to humans...
Maybe measuring intelligence is like measuring how good a tool is. It
depends on what you need it for.
2007/5/9, J. Storrs Hall, PhD. [EMAIL
article in NS about the Purdue guessing robot navigators...
http://www.newscientisttech.com/article/dn11805-guessing-robots-navigate-faster.html
I think I get a toljaso ("told ya so") on this one -- if the architecture were
composed of modules that did CBR (case-based reasoning), each in its own
language, from the very start, this
On Friday 11 May 2007 05:16:44 am Bob Mottram wrote:
...
But in practice it's difficult to do AI in an open source way, because
I've found that at least up until the present there have been very few
people who actually know anything about the algorithms involved and
can make a useful
On Friday 11 May 2007 02:01:09 pm Mike Tintner wrote:
...
As Daniel Wolpert will tell you, the sea squirt devours its brain as soon as
it stops moving.
As Dan Dennett has pointed out, this resembles what happens when one gets
tenure...
In the final and the first analysis, the brain is a
Right. The key issue is autogeny in the mental architecture. Learning will be
unsupervised to start, with internal feedback from how well the system
predicts what it sees next. Then we move into a mode where imitation is the
key, with the system trying to do what a person just did (e.g.
On Friday 11 May 2007 08:55:12 pm Mike Tintner wrote:
...All these machines you are talking about are
basically inert lumps of metal and don't exist without human beings to
switch them on, feed them, and interpret them.
Same is true of a baby, except for the part where you can turn it off and
On Friday 11 May 2007 09:15:56 pm Mike Tintner wrote:
I'm saying the last 400 years have been framed by Descartes' and science's
mind VERSUS body dichotomy. That in turn has been expressed in a whole
variety of subsidiary dichotomies and cultural battles:
... mind vs body
... reason vs
On Friday 11 May 2007 08:26:03 pm Pei Wang wrote:
*. Meaning comes from experience, and is grounded in experience.
I agree with this in practice but I don't think it's necessarily,
definitionally true. In practice, experience is the only good way we know of
to build the models that provide us
On Saturday 12 May 2007 09:18:16 am Mike Tintner wrote:
Josh: My major hobby-horse in this area is that a concept has to be an active
machine, capable of recognition, generation, inference, and prediction.
This sounds very like Jeff Hawkins (just reading On Intelligence now). Do
you see
On Saturday 12 May 2007 10:37:29 am Bob Mottram wrote:
In a recent interview
(http://discovermagazine.com/2007/jan/interview-minsky/) Marvin Minsky
says that one of the key things which an intelligent system ought to
be able to do is reason by analogy.
...
Which made me wonder, can
Thanks! I'll be in touch.
Josh
On Saturday 12 May 2007 10:08:26 am Derek Zahn wrote:
[EMAIL PROTECTED] writes:
Help from anyone on this list with experience with the GNU toolchain on
ARM-based microcontrollers will be gratefully accepted :-)
I have a lot of such experience and would
On Sunday 13 May 2007 06:10:59 pm Mike Tintner wrote:
c)has anyone incorporated in their AI/AGI system, as my ideas suggest
they should, a cartoon unit and a movie unit, for the purposes of reasoning?
Inasmuch as mine (still gotta come up with a name) hunts thru N-spaces for
useful
On Sunday 13 May 2007 08:14:43 am Kingma, D.P. wrote:
John, as I wrote earlier, I'm very interested in learning more about your
particular approach to:
- Concept and pattern representation. (i.e. types of concept, patterns,
relations?)
As I mentioned in another note (about the tennis ball),
On Saturday 12 May 2007 10:24:03 pm Lukasz Stafiniak wrote:
Do you have some interesting links about imitation? I've found these,
not all of them interesting, I'm just showing what I have:
Thanks -- some of those look interesting. I don't have any good links, but I'd
recommend Hurley and Chater,
On Monday 14 May 2007 11:02:33 am Benjamin Goertzel wrote:
We use some probability theory ... and some of the theory of rewriting
systems, lambda calculus, etc. This stuff is in a subordinate role to a
cognitive-systems-theory-based design, but is still very useful...
ditto -- and for my
Hmmm. If Goldbach's conjecture is true (and provable), the program will loop
forever and is provably non-intelligent. If it's false, there's a
counterexample and it's intelligent. (Assuming you mean by "halt" to go on to
the AIXItl part). The overall program is only a stumper if Goldbach is
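For concreteness, the program in question can be sketched in Python -- it
halts iff it finds an even number that is not a sum of two primes. It is
bounded here so the sketch actually terminates; the real stumper has no bound:

    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def goldbach_holds(n):
        # True iff even n can be written as a sum of two primes
        return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

    for n in range(4, 100_000, 2):
        if not goldbach_holds(n):
            print("counterexample:", n)   # known not to happen below ~4e18
            break
    else:
        print("no counterexample below 100000")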
Nine out of ten kids come home from a baseball game having seen their favorite
player win the game with a home run, and want to play baseball and make
spectacular plays. The tenth kid is instead inspired to do a brilliant
science fair project: Excel at what you love, do best, and are expected