Yes spiders are interesting, and jumping spiders have excellent stereo vision. There's one who patrols the area around my desk, and often jumps onto the monitor or keyboard. If I move my finger in front of it the spider clearly takes an interest and tracks the motion with its eyes.
Last year I
On 30/06/06, Eugen Leitl [EMAIL PROTECTED] wrote:
What impresses me is the high acuity and detailed world model, built by serial scanning of the world with a high-resolution spot. Very much like us (fovea), but implemented in just 10 neurons. It would definitely be interesting to build an
Measures of fitness used to optimise systems can be many and varied. In an application where a mobile robot needs to navigate from one location to another using its sensors, you can use measures such as:
- the number and type of features detected within the robot's vicinity
- amount of time to make
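As a rough sketch, measures like these could be combined into a single scalar fitness; the weights, feature counts and normalisation below are purely illustrative, not a recommendation:

```python
# Hypothetical weighted fitness for a navigating robot.
# Each term is normalised to [0, 1]; weights are illustrative only.

def navigation_fitness(features_detected, max_features,
                       travel_time, time_budget,
                       collisions):
    """Higher is better: reward rich perception and speed, punish collisions."""
    perception = min(features_detected / max_features, 1.0)
    speed = max(0.0, 1.0 - travel_time / time_budget)
    safety = 1.0 / (1.0 + collisions)
    return 0.4 * perception + 0.4 * speed + 0.2 * safety

print(navigation_fitness(features_detected=12, max_features=20,
                         travel_time=30.0, time_budget=60.0,
                         collisions=0))  # ~0.64
```

An evolutionary optimiser would then simply rank candidate controllers by this scalar.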
On 05/09/06, Kingma, D.P. [EMAIL PROTECTED] wrote:
A problem within the AI domain is that Vision has not been solved yet. The existing and functioning algorithms are mainly specialised into sub-domains like face recognition etc. These are very nice but are not general enough to use in an arbitrary
I think that's an insightful summary which matches very well my
experience of people doing academic research on AI. There are
exceptionally few of the hard-core people who are just
relentlessly pursuing it year after year. Many people doing
computer science courses take an interest
I would dearly love to have some intercompatible standards for robotics interfaces. There have been a few attempts to define a standard in the past, but none of them have been very successful so far. A few years ago I remember there was something called the Robotics Engineering Task Force which
In child development, understanding seems to considerably precede the ability to articulate that understanding. Also, development seems generally to move from highly abstract representations (stick men, smiley suns) to more concrete, adult-like ones.
On 23/10/06, justin corwin [EMAIL PROTECTED] wrote:
You can get depth information from single camera motion (eg Andrew Davison's MonoSLAM), but this requires an initial size calibration and continuous tracking. If the tracking is lost at any time you need to recalibrate. This makes single camera systems less practical. With a stereo camera the
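The practical advantage of the stereo case can be sketched with the standard disparity-to-depth relation for a calibrated, rectified pair; the focal length, baseline and disparity values below are made up:

```python
# Depth from stereo disparity for a calibrated, rectified camera pair.
# With a known baseline B (metres) and focal length f (pixels), depth
# follows directly from pixel disparity -- no initial size calibration
# or continuous tracking needed, unlike the single-camera case.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the two images")
    return focal_px * baseline_m / disparity_px

# A feature with 20 pixels of disparity, f = 500 px, 10 cm baseline:
print(depth_from_disparity(500.0, 0.1, 20.0))  # 2.5 metres
```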
On 23/10/06, Neil H. [EMAIL PROTECTED] wrote:
I'm also pretty surprised that they haven't done anything major with their vSLAM tech: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1570091
Evolution really failed to capitalise upon their early success. One of their biggest mistakes was to make
On 24/10/06, Pei Wang [EMAIL PROTECTED] wrote:
Any comments on Microsoft Robotics Studio?
Microsoft Robotics Studio is quite an unimpressive release. I had expected to see user-friendly IDEs and drag-and-drop function block programming, but there's none of that. About the best I can say is that
I don't know if the Novamente baby is going to be anything like a human baby. If it is, this article might be of interest. Design methodologies for central pattern generators: an application to crawling humanoids
http://birg2.epfl.ch/publications/fulltext/righetti06d.pdf
Also for some more
-budget programmable robot (say, in the price range of Robosapien V2 and LEGO Mindstorms NXT), which one will you recommend? I won't have high expectations in performance, but will be interested in testing ideas on the coordination of perception, reasoning, learning, and action.
Pei
On 10/24/06, Bob Mottram
, Charles D Hixson [EMAIL PROTECTED] wrote:
Bob Mottram wrote:
On 17/11/06, Charles D Hixson [EMAIL PROTECTED] wrote:
A system understands a situation that it encounters if it
predictably
acts in such a way as to maximize the probability of achieving its
.) For the first example, I was thinking of peek-a-boo.
Bob Mottram wrote:
Goals don't necessarily need to be complex or even explicitly
defined. One goal might just be to minimise the difference between
experiences (whether real or simulated) and expectations. In this way
the system learns what
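A toy illustration of that kind of implicit goal, where the only drive is to shrink the gap between expectation and experience; the constant "world" and the learning rate are invented for the example:

```python
# Minimal prediction-error learner: the implicit goal is just to make
# expectation match experience. World and learning rate are illustrative.

def learn_expectation(observations, rate=0.2):
    expectation = 0.0
    for obs in observations:
        error = obs - expectation          # surprise
        expectation += rate * error        # nudge expectation toward experience
    return expectation

world = [1.0] * 50                         # a perfectly boring, constant world
print(round(learn_expectation(world), 3))  # converges close to 1.0
```

In a richer setting the same error signal could drive learning of what to expect from each situation, which is all the "goal" amounts to here.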
This looks like the stuff I was doing 15 years ago. I started off being
very interested in neural networks, which were all the rage at the time. I
used backpropagation and other methods, both supervised and unsupervised.
Like this guy I also tried unsupervised learning of classifiers followed by
I was also reading that article. The place cell phenomenon has been known
for many years. For a long time I've thought that sleep might be used for
something other than just down time and cellular repair, and this research
does seem to confirm that sleep has some functional role. It's
On 06/01/07, Philip Goetz [EMAIL PROTECTED] wrote:
I worked for a robotics company called Arctec in the early 1980s.
We built a robot called the Gemini. They essentially solved the
navigation problem - in an office-space world. You stuck one small
reflector on both sides of every door, at
On 06/01/07, Mike Dougherty [EMAIL PROTECTED] wrote:
I really want to see a central traffic computer take driving away from all
the unqualified (or uninterested) drivers on the roads. I'd really like to
see companies get incentives to allow knowledge workers work from home
offices to save
On 06/01/07, Gary Miller [EMAIL PROTECTED] wrote:
I like the idea of the house being the central AI though and communicating
to
house robots through a wireless encrypted protocol to prevent inadvertent
commands from other systems and hacking.
This is the way it's going to go in my opinion.
There should also be a rating facility, where the person receiving the
telerobot service can provide feedback on how well the job had been done.
High scoring teleoperators would be more likely to get work than ones who
just picked your tools up and threw them around.
Within a few years I think
Ah, but is a thermostat conscious?
:-)
On 12/01/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
http://www.thermostatshop.com/
Not sure what you've been Googling on but here they are.
There's even one you can call on the telephone
If there's a market for this, then why can't I even
Actually it doesn't matter what convention you use. You could simply have
an entry box on the screen, with a prompt saying please type a short
statement that you believe to be either true or false. Some parsing can do
the rest. To avoid getting too verbose simply restrict the maximum number
of
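A minimal sketch of that kind of gated entry box; the word cap and the prompt handling here are hypothetical, just to show the shape of the check:

```python
# Hypothetical validator for free-form true/false statements:
# accept any short declarative string, reject over-long submissions.
MAX_WORDS = 12   # illustrative cap to keep statements terse

def accept_statement(text):
    words = text.strip().split()
    return 0 < len(words) <= MAX_WORDS

print(accept_statement("The sky is blue"))                 # True
print(accept_statement(" ".join(["word"] * 50)))           # False: too verbose
```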
I think all these are excellent suggestions.
On 13/01/07, Joel Pitt [EMAIL PROTECTED] wrote:
Some comments/suggestions:
* I think such a project should make the data public domain. Ignore
silly ideas like giving be shares in the knowledge or whatever. It
just complicates things. If the
specific aspect
of its being.
I would say that until we have software that can learn new free-format
information as we do, and
modify its goal stack based upon that new information, then we do not have
a truly conscious
computer.
--
*From:* Bob Mottram [mailto:[EMAIL
Another way to group the data might be to tease it out into dimensions of
what, where, when and whom. There does seem to be some neurological
evidence for this kind of categorization. Also, by indexing the data along
these lines it allows you to some extent to make meaningful interpolations
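A sketch of what indexing along those dimensions might look like, so that partial cues on any one axis can retrieve related episodes; the field names and recorded events are invented for illustration:

```python
# Sketch of indexing experience along what / where / when / whom axes,
# so that a partial cue on any axis retrieves related episodes.
from collections import defaultdict

class EpisodicIndex:
    def __init__(self):
        self.by_axis = {axis: defaultdict(list)
                        for axis in ("what", "where", "when", "whom")}

    def record(self, event):
        # File the same event under each of its four dimensions.
        for axis, index in self.by_axis.items():
            index[event[axis]].append(event)

    def recall(self, axis, value):
        return self.by_axis[axis][value]

memory = EpisodicIndex()
memory.record({"what": "sweeping", "where": "kitchen",
               "when": "morning", "whom": "robot"})
memory.record({"what": "mopping", "where": "kitchen",
               "when": "evening", "whom": "robot"})
print(len(memory.recall("where", "kitchen")))  # 2
```

Interpolation then becomes a matter of comparing episodes that share values along one or more of these axes.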
My feeling is that this probably isn't a great business idea. I think
collecting common sense data and building that into a general reasoner
should really be thought of as a long term effort, which is unlikely to
appeal to business investors expecting to see a return within a few years.
If any
On 19/01/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
How about this: the database would be open for anyone to download, for
experimentation or whatever purpose. Only when someone wants to incorporate
the data in an AGI, would a license fee be needed. Also I would make the
inference
Here's a video in which Donald Michie talks about the early years of AI
(GOFAI), beginning with his discussions with Alan Turing about building a
child machine.
http://www.aiai.ed.ac.uk/events/ccs2002/2002-10-11-michie.qtl
On 20/01/07, Kingma, D.P. [EMAIL PROTECTED] wrote:
Forgot to say:
Some recent experiments detecting neurons and their processes in high
resolution microscope images lead me to believe that the possibility of
reverse engineering the physical structure of the brain might not be as far
off as perhaps many people believe.
However, knowing what the 3D structure is
I'm no expert on automated reasoning, but wasn't the original Mindpixel
based fundamentally upon probabilistic representations (coherence values)
whereas Cyc, from what I understand, doesn't represent facts or rules
probabilistically.
- Bob
On 23/01/07, Stephen Reed [EMAIL PROTECTED] wrote:
/07, Bob Mottram [EMAIL PROTECTED] wrote:
I think it would be better to design a system with probabilistic
reasoning as a fundamental component from the outset, rather than trying to
bolt this on as an afterthought. I know from doing a lot of stuff with
machine vision that modelling sensor
Pick whatever public domain licence you prefer: GPL, MIT, Apache, or
whatever you believe will prevent legal abuses. In principle though I agree
that data entered by the public should be owned by the public.
On 27/01/07, David Hart [EMAIL PROTECTED] wrote:
On 1/27/07, Charles D Hixson
I've seen the programming language merry-go-round on AI related forums too
many times to become embroiled, but for what it's worth I'm using C# /
.NET. My master plan for robotic domination involves using Mono.
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or
Some of the 3D reconstruction stuff being done now is quite impressive (I'm
thinking of things like photosynth, monoSLAM and Moravec's stereo vision)
and this kind of capability to take raw sensor data and turn it into useful
3D models which may then be cogitated upon would be a basic
what is unique about your
approach?
Novamente doesn't involve real robotics right now but the design does
involve occupancy grids and probabilistic simulated robotics, so your
ideas are of some practical interest to me...
Ben
Bob Mottram wrote:
Some of the 3D reconstruction stuff being done now
What attracted me about the DP method was that it's less ad-hoc than
landmark based systems, but the most attractive feature is of course the
linear scaling which is really essential when dealing with large amounts of
data.
On 06/03/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Thanks, this
I can confirm from practical experimentation that 8-bit integers are too
coarse to model the probability density of a three-dimensional
space using the classic occupancy grid mapping method, but that you can just
about get away with using 16 bits for some applications. Personally I'm
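A sketch of the classic log-odds occupancy update with 16-bit storage, along the lines discussed above; the scale factor and evidence increments are illustrative, not tuned values:

```python
# Log-odds occupancy update stored as 16-bit integers, clamped to the
# int16 range. Increments are illustrative; with only 8 bits of range
# the cell would saturate after a handful of consistent readings.
INT16_MIN, INT16_MAX = -32768, 32767

def update_cell(cell, hit, lo_hit=85, lo_miss=-40):
    """Add scaled log-odds evidence for one sensor reading, clamped to int16."""
    cell += lo_hit if hit else lo_miss
    return max(INT16_MIN, min(INT16_MAX, cell))

cell = 0
for _ in range(10):              # ten consistent "occupied" readings
    cell = update_cell(cell, hit=True)
print(cell)                      # 850: plenty of headroom left in 16 bits
```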
That's pretty evil. I wouldn't use the Numenta software simply on the basis
of this licence.
On 07/03/07, Shane Legg [EMAIL PROTECTED] wrote:
It might however be worth thinking about the licence:
Confidentiality. 1. Protection of Confidential Information. You agree that
all code,
RoboTurtle II is still my favourite.
On 12/03/07, Ben Goertzel [EMAIL PROTECTED] wrote:
Hi all,
If you have 2.5 minutes or so to spare, my 13-year-old son Zebulon has
made another Singularity-focused mini-movie:
http://www.zebradillo.com/AnimPages/The%20Shtinkularity.html
This one is
Early stage vision involves the detection of primitive types of geometry -
edges, lines of different orientation, blobs, corners, colours and motion in
different directions. These seem to arise from simple self-organisation due
to the physical properties of neurons and architecture of receptive
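The orientation-tuned edge responses mentioned above can be illustrated with a Sobel operator, one of the simplest primitive edge detectors; the tiny test image is invented:

```python
# Primitive edge detection of the kind described above: a Sobel kernel
# responding to a vertical intensity edge. Pure-Python sketch.

GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]  # horizontal-gradient kernel

def sobel_x(image, y, x):
    # Correlate the 3x3 kernel with the neighbourhood centred on (y, x).
    return sum(GX[j][i] * image[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

# Dark left half, bright right half: a vertical edge between columns 2 and 3.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
responses = [sobel_x(img, 2, x) for x in range(1, 5)]
print(responses)  # [0, 36, 36, 0] -- strongest response straddles the edge
```

Banks of such kernels at different orientations give the line and edge selectivity seen in early visual cortex.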
On 17/03/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
PS -- remember also the history of UNIVAC. Eckert and Mauchly made some
ground-breaking progress in early computing, including the stored-program
concept which somehow was stolen / co-discovered by von Neumann. Later von
Neumann
It's difficult to judge how impressive or otherwise such demos are, since it
would be easy to produce an animation of this kind with trivial
programming. What are we really seeing here? How much does the baby AGI
know about fetching before it plays the game, and how much does it need to
learn?
I think making direct comparisons between computational power and the animal
kingdom has always been a questionable exercise, but I generally agree with
trying to tackle problems in a similar order that evolution did, because
evolution needed to find incremental solutions.
I've long wanted to
On 21/03/07, Ben Goertzel [EMAIL PROTECTED] wrote:
* use a combination of lidar and camera input
* write code that took this combined input to make a 3D contour map of
the perceived surfaces in the world
* use standard math transforms to triangulate this contour map
* use some AI heuristics
I've seen heated arguments over computer languages on AI-related forums many
times before, so I've no intention of pimping any particular language
specifically for AGI development. Actually I think the state of the art in
software creation at the moment is still rather crummy, and not much
Creating compelling virtual characters for games or online worlds sounds
like a nice direction to go down, which fits well with the overall aims of
producing an AGI. Although most of my own focus is in producing real world
intelligent entities in the form of robotics I've long thought, ever
Yes that's usually the way it works. Initially you need one person or a
small team to produce something which is at least good enough to be run and
tested by others. Improvements can be made from there on.
On 29/03/07, Russell Wallace [EMAIL PROTECTED] wrote:
On 3/29/07, YKY (Yan King Yin)
On 29/03/07, Russell Wallace [EMAIL PROTECTED] wrote:
I think there's at least one good practical reason to avoid doing that, or
at least to do it at arm's length, in a "potential users discussing potential
features" mailing list rather than "here's our code as we write it". In the
early stages of
Nice article. It's a white knuckle ride towards the singularity from now
on!
On 14/04/07, Bruce Klein [EMAIL PROTECTED] wrote:
The creation of a superhumanly intelligent AI system could be possible
within 10 years, with an AI Manhattan Project, says Ben Goertzel.
Published on
I mostly agree with this. There are a set of jobs which most creatures need
to be able to perform. Finding food, navigating through space, avoiding
harm, practising skills, predicting near future events, reproducing and so
on. Intelligence could be said to be a function of this combined set of
The prospect of robots causing harm is not only a theoretical SIAI-style
consideration. At the moment I'm adding a manipulator arm to a telerobot,
and the intention here is to allow some useful household jobs to be done
using the robot, such as sweeping or mopping up, via an internet based
On 24/04/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
Mining plus matching, analogy, and interpolation/extrapolation. The key to
making it work is to form the abstractions that allow the robot/AI to
interpret the actions as "grasp broom; lower until it touches floor"
instead
of "move hand to
I've been using the Charmed Labs Qwerk for over a month now and it is a very
neat system which brings together in a single device many of the things
which are traditionally separate, such as computers, digital/analog I/O
boards, vision systems, servo control, etc. Integrating these diverse
Well you've correctly anticipated the next step. I'm adding a manipulator
arm, which is only a little shorter than an adult human arm, so that the
robot will be capable of doing a few useful jobs. The intention here is to
use it for things like sweeping, mopping or dusting.
The robot, which
of the bucket. When your masterpiece is complete you can
call yourself an abstract expressionist and sell the robot's works to the
Tate gallery for a million dollars with the title "Expressions of post-human
hubris".
On 26/04/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On 4/26/07, Bob Mottram [EMAIL
It's a shame that Sony discontinued their robotics division. I suspect in
this case this is just anthropomorphisation of some quirk of the robot's
control system. What's being described here is the psychological principle
of reciprocation: I give something of value to you, you give something of
When I first saw this on the BBC web site I thought it looked exciting -
maybe the first upload. But on closer inspection it seems to be less
impressive. There is an extremely brief report on what they did, which
looks like merely simulating a large number of neurons on a supercomputer,
without
On 30/04/07, Mike Tintner [EMAIL PROTECTED] wrote:
Best example I can think of is William Calvin saying something like:
the conscious mind is clearly designed to deal with problematic decisions,
where existing solutions won't work. The smartest mind is the one that can
find the correct answer
On 30/04/07, Mike Dougherty [EMAIL PROTECTED] wrote:
graphics, image, redrawn, visualizations - all indicative of a high
degree of visual-spatial thinking. I'm curious: are your own AGI
efforts modelled on this mode of thought? I ask because I wonder
if the machine intelligence we build
On 01/05/07, Mike Tintner [EMAIL PROTECTED] wrote:
There is no choice about all this. You do not have an option to have a pure
language AGI - if you wish any brain to understand the world, and draw
further connections about the world, it HAS to operate with graphics and
images. Period.
Plato's
On 01/05/07, DEREK ZAHN [EMAIL PROTECTED] wrote:
what exactly do you think my internal simulation processes might
be doing when I read the following sentence from your email?
In short, imagery from visual, acoustic and other sensory modalities
give life through simulation to the basic skeletal
I don't know what algorithms are being referred to in this article -
perhaps a type of monte carlo localization. Does anyone have more
direct references? Also it's only in 2D, which is normal for laser
based mapping.
It's unlikely that we'll see products based on this sort of technology
On 10/05/07, Bo Morgan [EMAIL PROTECTED] wrote:
You could probably buy 10 cheap webcams and put them all around the robot
and get some vision algorithms to turn them into 3D scenes, which are
avoided/mapped? This seems like a pretty well understood and constrained
problem.
This kind of camera
The open source idea sounds great and in general I agree with this
approach. One of the main benefits in my view is ensuring that
powerful new technology does not fall into the hands of any single
individual, company or nation who could then monopolise its use,
potentially with unfriendly
In order to differentiate this from the rest of the robotics crowd you
need to avoid building a specialised pinball playing robot. If the
machine can learn and form concepts based upon its experiences it
should be able to do so with any kind of game, provided that suitable
actuators are
In a recent interview
(http://discovermagazine.com/2007/jan/interview-minsky/) Marvin Minsky
says that one of the key things which an intelligent system ought to
be able to do is reason by analogy.
His thoughts tumbled in his head, making and breaking alliances
like underpants in a dryer
On 12/05/07, Mike Tintner [EMAIL PROTECTED] wrote:
(I think this area is worth a lot of discussion - it's so important to
AGI).
As Minsky and yourself seem to agree this does seem to be a central
issue, since it cuts to the heart of how we are able to represent
things and then make meaningful
are not.
-- Ben
On 5/12/07, Richard Loosemore [EMAIL PROTECTED] wrote:
Pei Wang wrote:
On 5/12/07, Bob Mottram [EMAIL PROTECTED] wrote:
In a recent interview
(http://discovermagazine.com/2007/jan/interview-minsky/ )
Marvin Minsky
says that one of the key things which an intelligent system
The SIAI videos which are up on google so far look ok. I didn't know
that they were actually trying to *build* an AI, as opposed to just
raising the generally relevant issues.
To the layman this will just look like bunk, since many of these
issues aren't yet within the popular zeitgeist. If I
An old poem of mine, vaguely AI related:
Factory worker
the watchmakers strongest son
arms cascading through space
time, the unforgiving metronome
on a stage of smoke and tarnish
perhaps only those metal hands
worn down by years of toil
could tell the stories of your forefathers
On
The various categories described in Kinds of Minds seem sensible,
and will probably become part of the standard AGI terminology.
On 01/06/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Ray Kurzweil has arranged to put a couple of sample chapters up on his site:
Kinds of Minds
Ownership of things and establishing who owns what seems to be very
important to humans. One time I bought my two young nephews identical
toys, and then subsequently watched them fighting over who owned which
toy - even though they were exactly alike. What does it mean to own
something, and do
On 02/06/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
the market). Anyway I propose to remedy this problem by fixing the license
price of all patents we acquire, by applying a fixed formula based on
individuals' assessment of their contributions.
From having worked on open source projects
One way in which you might be able to make use of many members who may
be interested in AGI but lack the background knowledge or programming
skills might be to develop scripting languages or IDEs which would
allow volunteers (paid or otherwise) to generate training scenarios
or evaluate test
One possible method of becoming an AGI tycoon might be to have the
main core of code as conventional open source under some suitable
licence, but then charge customers for the service of having that core
system customised to solve particular tasks. The licence might permit
use of the code for
On 04/06/07, Mark Waser [EMAIL PROTECTED] wrote:
One possible method of becoming an AGI tycoon might be to have the
main core of code as conventional open source under some suitable
licence, but then charge customers for the service of having that core
system customised to solve particular
On 04/06/07, Derek Zahn [EMAIL PROTECTED] wrote:
I wonder if a time will come when the personal security of AGI researchers or
conferences will be a real concern. Stopping AGI could be a high priority
for existential-risk wingnuts.
I think this is the view put forward by Hugo De Garis. I
I remember last year there was some talk about possibly using Lojban
as a language for teaching an AGI in a minimally ambiguous
way. Does anyone know if the same level of ambiguity found in
ordinary English language also applies to sign language? I know very
little about sign language,
What is vectorianism?
On 06/06/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Hello Josh,
Do you find a2i2 convincing? Their vectorianism seems to resonate
with your ideas.
Thank You.
Here's my book list:
http://www.librarything.com/catalog/motters
On 07/06/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Sorry, I've skipped these:
Estimation of Distribution Algorithms: A New Tool for Evolutionary
Computation (Genetic Algorithms and Evolutionary Computation)
(Hardcover)
I have one of Richard Sutton's books, and RL methods are useful but I
also have some reservations about them. Often in this sort of
approach a strict behaviorist position is adopted where the system is
simply trying to find an appropriate function mapping inputs to
outputs. The internals of the
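The strictly input-to-output flavour of such methods can be made concrete with the standard tabular Q-learning update; the two-state toy world below is invented for illustration:

```python
# Tabular Q-learning on a trivial two-state chain: the agent learns a
# mapping from state-action pairs to values with no internal model at
# all, which is exactly the behaviourist flavour discussed above.
import random

random.seed(0)
ALPHA, GAMMA = 0.5, 0.9
Q = {(s, a): 0.0 for s in (0, 1) for a in ("left", "right")}

def step(state, action):
    """'right' from state 0 reaches the goal state 1 for reward 1."""
    if state == 0 and action == "right":
        return 1, 1.0
    return 0, 0.0

for _ in range(200):
    s = 0
    a = random.choice(("left", "right"))           # pure exploration
    s2, r = step(s, a)
    best_next = max(Q[(s2, b)] for b in ("left", "right"))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

print(Q[(0, "right")] > Q[(0, "left")])  # True: the mapping was learned
```

Whatever richer structure the environment has, the learner only ever sees it through this scalar reward channel, which is the reservation expressed above.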
On 25/06/07, Robert Wensman [EMAIL PROTECTED] wrote:
To put it bluntly: Isn't the concept of reinforcement learning (as typically
described) exactly the kind of simplistic AI models that AGI tries to
distance itself from?
It can be. The sorts of problem which I've seen RL applied to were of
a
From what I've seen so far HTM has only been applied to very trivial
binary images far less complex than medical images or more ordinary
scenes. If they can return reasonable results from camera images in a
way which is invariant to scale, translation and rotation then I'll be
impressed. As I
Yes there's no reason why the fitness criteria or rules of the game
can't also be subject to exploration during play. I suppose this is
dangerous only in the sense that an AGI's high level or top level
motivations could be subject to change during playful characterization
of the space of
I think laziness in teenagers has more to do with
physiological/hormonal changes which simply won't apply to an AGI.
Being playful does not imply laziness. Also AGIs may have far more
spare computational resource which they can dedicate to playful
exploration of possible outcomes than is the
Games - and mathematics is the biggest game of all if you ask me
- are the important things in life.
- Marvin Minsky
On 06/07/07, Pete de Lepper [EMAIL PROTECTED] wrote:
I guess what I'm thinking is how would an AGI determine how much of it's
time should be spent playing? If you impose
In principle video on the internet may become a useful resource for
AGI learning. However in practice the previous comments hand wave
over a lot of very complex details, in a similar manner to how some of
the early AI pioneers believed that visual interpretation would be
easy.
To be able to
paradigm,
and its really only after many years of trying to make this sort of
thing work that I had to change my views.
On 05/08/07, a [EMAIL PROTECTED] wrote:
Bob Mottram wrote:
it seems infeasible that 2D templates
need to be created for every possible viewing angle and scale of an
object
On 28/09/2007, Don Detrich - PoolDraw [EMAIL PROTECTED] wrote:
I find it
interesting that some of you are so nervous about promoting your own
industry.
This is because in the history of the field there have been no
shortage of self-promoters who never really delivered on their
promises. It's
On 03/10/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Given (1), no context-free analysis can understand natural language.
Given (2), no adaptive agent can learn (proper) understanding of natural
language given only texts.
For human-like understanding, an AGI would need to participate in
On 03/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
RSI is not necessary for human-level AGI.
I think it's too early to be able to make a categorical statement of
this kind. Does not a newborn baby recursively improve its thought
processes until it reaches human level?
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
Linas Vepstas wrote:
Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would not be what I'd
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
This seems very vague. I would suggest that if there is no clear
hat activity, or traditional forms of
despotism. There seems to be a clarity gap in the theory here.
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
Bob Mottram wrote:
On 04/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
As to exactly how, I don't know, but since the AGI
Economic libertarianism would be nice if it were to occur. However,
in practice companies and governments put in place all sorts of
anti-competitive structures to lock people into certain modes of
economic activity. I think economic activity in general is heavily
influenced by cognitive biases
On 10/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
Agreed. There are many other forums where political ideology can be debated.
On 16/10/2007, John G. Rose [EMAIL PROTECTED] wrote:
Part of the reason AI has so much damaged credibility is that over the past
decades there have always been these predictions that "by some year robots
will be doing this" or "robots will be doing that". Any idiot can make
predictions for 2050.
In the case of an AI (presumably a robot) negotiating around humans I
expect that the way in which this would be done would be quite
different from the way that humans do it. In the human case the
circuitry controlling walking direction and speed is substantially the
same for two individuals
On 17/10/2007, David Orban [EMAIL PROTECTED] wrote:
There are now Department of Labor predictions of 50%-80% unemployment
rates due to automation of white collar jobs. This in my opinion is
not a small matter either.
On the unemployment question I remain optimistic. If you go back a
few
Despite these arguments there are good reasons for caution. When you
look at the history of AI research one thing tends to stand out - some
people never seem to learn of the dangers of hype. Having been around
for a while I've heard many individuals make a "ten years to SAI" type
of prediction,