If you can do better voice recognition, that's a significant
application in its own right, as well as having uses in other
applications e.g. automated first layer for call centers.
If you can do better image/video recognition, there are a great many
uses for that -- look at all the things people
I don't often request list moderation, but if this kind of off-topic spam
and clueless trolling doesn't call for it, nothing does, so: I hereby
request that a moderator take appropriate action.
On Mon, Aug 2, 2010 at 3:40 PM, Steve Richfield
Sometime when you are
I spent a while back in the 90s trying to make AGI and alife converge,
before establishing to my satisfaction that the approach is a dead end: we
will never have anywhere near enough computing power to make alife
evolve significant intelligence (the only known success took 4 billion
years on a
On Mon, Jun 28, 2010 at 4:54 PM, David Jones davidher...@gmail.com wrote:
But, that's why it is important to force oneself to solve them in such a way
that it IS applicable to AGI. It doesn't mean that you have to choose a
problem that is so hard you can't cheat. It's unnecessary to do that.
It is great at certain narrow applications, but nowhere near where it needs to be
On Mon, Jun 28, 2010 at 4:00 PM, Russell Wallace
On Mon, Jun 28, 2010 at 8:56 PM, David Jones davidher...@gmail.com
Having experience with the full problem
On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
That being the case, why don't elephants and other large creatures have
really gigantic brains? This seems to be SUCH an obvious evolutionary step.
Personally I've always wondered how elephants managed to evolve
On Mon, Jun 21, 2010 at 11:05 PM, Steve Richfield
Another pet peeve of mine. They could/should do MUCH more fault tolerance
than they now do. Present puny efforts are completely ignorant of past
developments, e.g. Tandem Nonstop computers.
Or perhaps they
Melting and boiling at least should be doable: assign every bead a
temperature, and let solid interbead bonds turn liquid above a certain
temperature and disappear completely above some higher temperature.
And it occurs to me you could even have fire. Let fire be an element,
whose beads have negative gravitational mass. Beads of fuel elements
like wood have a threshold temperature above which they will turn into
fire beads, with release of additional heat.
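The rules above can be sketched in a few lines. This is a toy illustration only: the threshold values, the names (`bond_state`, `step_bead`), and the constants are assumptions for the sketch, not part of any existing simulator.

```python
# Illustrative thresholds (assumed values, in arbitrary degree units):
# bonds are solid below MELT_T, liquid between MELT_T and BOIL_T, and
# gone above BOIL_T; fuel beads at or above IGNITE_T become fire beads
# and release extra heat.
MELT_T, BOIL_T, IGNITE_T, COMBUSTION_HEAT = 0.0, 100.0, 300.0, 50.0

def bond_state(temp):
    """State of an interbead bond at the given temperature."""
    if temp < MELT_T:
        return 'solid'
    if temp < BOIL_T:
        return 'liquid'
    return 'none'

def step_bead(element, temp):
    """Apply the ignition rule to one bead; return (element, temp)."""
    if element == 'wood' and temp >= IGNITE_T:
        # The new fire bead would also get negative gravitational mass.
        return 'fire', temp + COMBUSTION_HEAT
    return element, temp

print(bond_state(50.0))          # liquid
print(step_bead('wood', 350.0))  # ('fire', 400.0)
```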
Yeah :-) though boiling an egg by putting it in a pot of boiling
water, that much I think should be doable.
On Tue, Jan 13, 2009 at 3:41 PM, Ben Goertzel b...@goertzel.org wrote:
Indeed... but cake-baking just won't have the same nuances ;-)
On Tue, Jan 13, 2009 at 10:08 AM, Russell Wallace
I think this sort of virtual world is an excellent idea.
I agree with Benjamin Johnston's idea of a unified object model where
everything consists of beads.
I notice you mentioned distributing the computation. This would
certainly be valuable in the long run, but for the first version I
On Tue, Jan 13, 2009 at 1:22 AM, Benjamin Johnston
Actually, I think it would be easier, more useful and more portable to
distribute the computation rather than trying to make it run on a GPU.
If it would be easier, fair enough; I've never programmed a GPU, I
On Fri, Dec 26, 2008 at 11:56 PM, Abram Demski abramdem...@gmail.com wrote:
That's not to say that I don't think some representations are
fundamentally more useful than others-- for example, I know that some
proofs are astronomically larger in 1st-order logic as compared to
On Wed, Dec 17, 2008 at 3:54 PM, Paul Cray pmc...@gmail.com wrote:
In the UK, it is certainly possible to proceed directly to a PhD without
doing an MSc or much in the way of coursework, provided you have a good
enough Bachelor's degree. As a self-funded student, it would just be a
On Wed, Dec 10, 2008 at 5:47 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
I don't see how, because it is completely unbounded and HIGHLY related to
specific platforms and products. I could envision a version that worked for
a specific class of problems on a particular platform, but it would
On Wed, Dec 10, 2008 at 5:35 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Maybe I should adopt the ORCAD model, where I provide it for free for a
while, then start inching the price up and UP and UP.
Bad for PR. I suggest providing a free trial but making it clear from
the outset there will be
As an application domain for Dr. Eliza, medicine has the obvious
advantage of usefulness, but the disadvantage that it's hard to assess
performance -- specific data is largely unavailable for privacy
reasons, and most of us lack the expertise to properly assess it even
if it were available.
On Tue, Nov 25, 2008 at 12:58 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
summed up in the last two words of the abstract: "without input". Who
ever said that RSI had anything to do with programs that had no input?
It certainly wasn't a strawman as of a couple of years ago; I've had
On Wed, Nov 5, 2008 at 3:27 PM, Bob Mottram [EMAIL PROTECTED] wrote:
Brains however are not nearly so sensitive to small errors, and in
some cases fairly extensive damage can be sustained without causing
the entire system to fail.
Let's face it, they're not that insensitive; some debugging
On Sun, Nov 2, 2008 at 7:14 AM, Benjamin Johnston
[EMAIL PROTECTED] wrote:
The Prolog clause database effectively has this same problem. It solves it
simply by indexing on the functor of the outermost term and the first
argument of that term. This may be enough for your problem. As Donald Knuth
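The indexing scheme described above can be sketched as a dictionary keyed by functor and first argument. This is a toy illustration (the dict layout and names are assumptions, not how a real Prolog engine stores clauses; variable first arguments, which match everything, would need a catch-all bucket that is omitted here):

```python
from collections import defaultdict

# Clause database indexed by (functor, first argument), so retrieval
# skips clauses that cannot possibly match the query.
index = defaultdict(list)

def add_clause(clause):
    """clause is a tuple: (functor, first_arg, ...rest)."""
    functor, first_arg = clause[0], clause[1]
    index[(functor, first_arg)].append(clause)

def candidates(functor, first_arg):
    """Clauses whose head could match a ground (functor, first_arg) query."""
    return index[(functor, first_arg)]

add_clause(('FOO', 42, 'body1'))
add_clause(('FOO', 43, 'body2'))
print(candidates('FOO', 42))  # [('FOO', 42, 'body1')]
```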
In classical logic programming, there is the concept of unification,
where one expression is matched against another, and one or both
expressions may contain variables. For example, (FOO ?A) unifies with
(FOO 42) by setting the variable ?A = 42.
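The unification step can be sketched as follows. This is a minimal toy, assuming a nested-tuple encoding of expressions and a '?'-prefix convention for variables; the names are illustrative, and there is no occurs check:

```python
def is_var(t):
    """Variables are strings starting with '?', e.g. '?A' (toy convention)."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings until a non-variable or unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution dict that unifies a with b, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# (FOO ?A) unifies with (FOO 42) by setting ?A = 42:
print(unify(('FOO', '?A'), ('FOO', 42)))  # {'?A': 42}
```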
Suppose you have a database of N expressions, and
On Fri, Oct 31, 2008 at 8:00 PM, Pei Wang [EMAIL PROTECTED] wrote:
The closest thing I can think of is Rete algorithm --- see
Thanks! If I'm understanding correctly, the Rete algorithm only
handles lists of constants and variables, not general
On Thu, Oct 30, 2008 at 6:45 AM, [EMAIL PROTECTED] wrote:
It sure seems to me that the availability of cloud computing is valuable
to the AGI project. There are some claims that maybe intelligent programs
are still waiting on sufficient computer power, but with something like
On Thu, Oct 30, 2008 at 3:07 PM, John G. Rose [EMAIL PROTECTED] wrote:
My suspicion though is that say you had 100 physical servers and then 100
physical cloud servers. You could hand-tailor your distributed application
so that it is far more efficient not running on the cloud substrate.
On Thu, Oct 30, 2008 at 3:42 PM, John G. Rose [EMAIL PROTECTED] wrote:
Not talking custom hardware, when you take your existing app and apply it to
the distributed resource and network topology (your 100 servers) you can
structure it to maximize its execution reward. And the design of the app
On Thu, Oct 30, 2008 at 4:04 PM, John G. Rose [EMAIL PROTECTED] wrote:
No, you don't lock it into an instance in time. You make it selectively
When your app or your application's resources span more than one machine you
need to organize that. The choice of how you do so affects
On Sat, Oct 25, 2008 at 9:29 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
There are systems that do just that, constructing models of a program
and representing conditions of absence of a bug as huge formulas. They
work with various limitations, theorem-prover based systems using
On Sat, Oct 25, 2008 at 9:57 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Note that people have been working on this specific technical problem for 30
years (see the scary amount of work by Cousot's lab,
http://www.di.ens.fr/~cousot/COUSOTpapers/ ), and they are still
tackling fixed invariants,
On Sat, Oct 25, 2008 at 11:14 PM, Mark Waser [EMAIL PROTECTED] wrote:
Anyone else want to take up the issue of whether there is a distinction
between competent scientific research and competent learning (whether or not
both are being done by a machine) and, if so, what that distinction is?
I understand that some here have already started a project in a given
language, and aren't going to change at this late date; this is
addressed to those for whom it's still an open question.
The choice of language is said to not matter very much, and there are
projects for which this is true. AGI
On Fri, Oct 24, 2008 at 12:14 AM, Trent Waddington
[EMAIL PROTECTED] wrote:
Well, as a somewhat good chess instructor myself, I have to say I
completely agree with it. People who play well against computers
rarely rank above first-time players... in fact, most of them tend to
not even know the
On Fri, Oct 24, 2008 at 10:56 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Why mix AI-written code and your own code?
Example: you want the AI to generate code to meet a spec, which you
provided in the form of a fitness function. If the problem isn't
trivial and you don't have a million years to
On Fri, Oct 24, 2008 at 10:24 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Russell, in what capacity do you use that language?
In all capacities, for both hand written and machine generated content.
Do AI algorithms
write in it?
That's the idea, once said AI algorithms are implemented.
On Fri, Oct 24, 2008 at 11:49 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Well, my point was that maybe the mistake is use of additional
language constructions and not their absence? You yourself should be
able to emulate anything in lambda-calculus (you can add interpreter
for any extension
On Fri, Oct 24, 2008 at 2:50 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
I'd write it in a separate language, developed for human programmers,
but keep the language with which AI interacts minimalistic, to
understand how it's supposed to grow, and not be burdened by technical
details in the
On Fri, Oct 24, 2008 at 3:04 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
I'd write this specification in language it understands, including a
library that builds more convenient primitives from that foundation if
Okay, so you'd waste a lot of irreplaceable time creating a homebrew
On Fri, Oct 24, 2008 at 3:24 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Again, specifics. What is this specification thing? What kind of
task are to be specified in it? Where does it lead, where does it end?
At the low end, you could look at some of the fitness functions that
have been written
On Fri, Oct 24, 2008 at 3:37 PM, Eric Burton [EMAIL PROTECTED] wrote:
Due to a characteristic paucity of datatypes (all of them powerful) and a
terse, readable syntax, I usually recommend Python for any project
that is just out the gate. It's my favourite way by far at present to
mangle huge tables.
On Fri, Oct 24, 2008 at 3:48 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
This will be practical once we have a million-fold decrease in the cost of
computation, based on the cost of simulating a brain-sized neural network. It
could occur sooner if we discover more efficient
solutions. So far
On Fri, Oct 24, 2008 at 4:09 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
If that allows AI to understand the code, without directly helping it.
In this case teaching it to understand these other languages might be
a better first step.
And to do that you need to give it a specification of those
On Fri, Oct 24, 2008 at 4:10 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Although I've already chosen an implementation language for my Texai project
- Java, I believe that my experience may interest you.
Very much so, thank you.
I moved up one level of procedural abstraction to
On Fri, Oct 24, 2008 at 4:54 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
That's why you need a fault tolerant language that works well with
redundancy. However you still have the inherent limitation that genetic
algorithms can learn no faster than 1 bit per population doubling.
More to the
On Fri, Oct 24, 2008 at 5:26 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
It's a specific problem: jumping right from specification to code
generation doesn't work, because you'd need too much specification.
At the same time, a human programmer will need much less
specification, so it's a
On Fri, Oct 24, 2008 at 5:37 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
You are describing it as a step one, with writing huge specifications
by hand in formally interpretable language.
I skipped a lot of details because this thread is on programming
languages not my roadmap to AGI :-)
On Fri, Oct 24, 2008 at 5:30 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Interesting! I have a good friend who is also an AGI enthusiast who
followed the same path as you ... a lot of time burned making his own
superior, stripped-down, AGI-customized variant of LISP, followed by a
On Fri, Oct 24, 2008 at 5:37 PM, Mark Waser [EMAIL PROTECTED] wrote:
Instead of arguing language, why don't you argue platform?
Platform is certainly an interesting question. I take the view that
Common Lisp has the advantage of allowing me to defer the choice of
platform. You take the view that
On Fri, Oct 24, 2008 at 5:55 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Composed statements generate Java statements such as an assignment
statement, block statement and so forth. You can see that there is a tree
structure that can be navigated when performing a deductive composition
On Fri, Oct 24, 2008 at 6:08 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
No. Genetic algorithms implement a beam search. It is linear in the best case
and exponential in the worst case. It depends on the shape of the search
It turns out that real search spaces are deceptive, so that
On Fri, Oct 24, 2008 at 6:00 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
If it's not supposed to be a generic language war, that becomes relevant.
Fair point. On the other hand, I'm not yet ready to write a detailed
road map out as far as fix user interface bugs in Firefox. Okay,
here are some
On Fri, Oct 24, 2008 at 6:27 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Not really. Although the distinguishing feature of Lisp syntax is a
nested list, the fact that my composition framework is also a tree does
not make that framework a Lisp-family language.
What do you see as the
On Fri, Oct 24, 2008 at 6:54 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Let me conclude this particular point by agreeing that the Texai program
composition framework is a domain-specific programming language whose
purpose is to express algorithms in tree form, from which Java source
On Fri, Oct 24, 2008 at 6:49 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
I write software for analysis of C/C++ programs to find bugs in them
(dataflow analysis, etc.). Where does AI come into this? I'd really
like to know.
Wouldn't you find AI useful? Aren't there bugs that slip past your
On Fri, Oct 24, 2008 at 7:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
This general sentiment doesn't help if I don't know what to do specifically.
Well, given a C/C++ program that does have buffer overrun or stray
pointer bugs, there will typically be a logical proof of this fact;
On Fri, Oct 24, 2008 at 9:49 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Only because it is hard to come up with representations that can be
incrementally modified (don't break when you flip 1 bit).
No, I came up with some representations that didn't break, a
sufficiently large percentage of the
On Tue, Oct 21, 2008 at 4:53 PM, Abram Demski [EMAIL PROTECTED] wrote:
As it happens, this definition of
meaning admits horribly-terribly-uncomputable-things to be described!
(Far worse than the above-mentioned super-omegas.) So, the truth or
falsehood is very much not computable.
On Tue, Oct 21, 2008 at 8:13 PM, Abram Demski [EMAIL PROTECTED] wrote:
The wikipedia article Ben cites is definitely meant for
mathematicians, so I will try to give an example.
Yes indeed -- thanks!
The halting problem asks us about halting facts for a single program.
To make it worse, I
On Wed, Oct 22, 2008 at 3:11 AM, Abram Demski [EMAIL PROTECTED] wrote:
I agree with you there. Our disagreement is about what formal systems
a computer can understand.
I'm also not quite sure what the problem is, but suppose we put it this way:
I think the most useful way to understand the
Split seems reasonable to me. Right now this is the closest there is
to a venue specifically for AGI engineering, whereas there are other
places to discuss AGI philosophy. (For example, AGI philosophy would
presumably be on topic for extropy-chat.)
As for the suggestions that we regress to the
On Wed, Oct 15, 2008 at 5:54 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
The below suggestion is a perfect illustration of why I have given up on the
list: it shows that the AGI list has become, basically, just a vehicle for
the promotion of Ben's projects and preferences, while everything
I'm currently investigating the problem of theorem proving as an AGI
domain, not so much for its own sake as from the following reasoning:
AGI needs to learn procedural knowledge, which means program code; and
reasoning about program code requires formal logic.
From a programming viewpoint,
On Thu, Oct 16, 2008 at 1:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
Goedel and Turing showed that theorem proving is equivalent to solving the
halting problem. So a simple measure of intelligence might be to count the
number of programs that can be decided. But where does that get us?
On Sat, Oct 11, 2008 at 4:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Sorry if my response was somehow harsh or inappropriate, it really wasn't
intended as such. Your contributions to the list are valued. These last
few weeks have been rather tough for me in my entrepreneurial role
On Tue, Oct 7, 2008 at 1:47 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
But how do you explain the fact that many of today's top financially
successful companies rely on closed-source software? A recent example
is Google's search engine, which remains closed source.
Nobody paid Google for
A good idea and a euro will get you a cup of coffee. Whoever said you
need to protect ideas is just shilly-shallying you. Ideas have no
market value; anyone capable of taking them up, already has more ideas
of his own than time to implement them. Don't take my word for it,
look around you; do you
On Tue, Oct 7, 2008 at 4:07 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
I was trying to find a way so we can collaborate on one project, but
people don't seem to like the virtual credit idea.
No, no we don't :-)
Even if I go opensource, the number of significant contributors may
Given that Cyc has accomplished far more in the logical encoding of
common sense than any other project, starting with OpenCyc and
building from there would seem to suggest itself as the obvious course
of action. Am I missing something?
On Sat, Sep 27, 2008 at 8:02 PM, YKY (Yan King Yin)
On Mon, Sep 22, 2008 at 1:34 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
On the other hand, if intelligence is in large part a systems phenomenon,
that has to do with the interconnection of reasonably-intelligent components
in a reasonably-intelligent way (as I have argued in many prior
On Fri, Sep 19, 2008 at 11:46 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
So perhaps someone can explain why we need formal knowledge representations
to reason in AI.
Because the biggest open subproblem right now is dealing with
procedural, as opposed to merely declarative or reflexive,
The most plausible explanation I've heard is that humor evolved as a
social weapon for use by a group of low status individuals against a
high status individual. This explains why laughter is involuntarily
contagious, why it mostly occurs in conversation, why children like
watching Tom and Jerry
On Tue, Aug 26, 2008 at 2:38 PM, Mike Tintner [EMAIL PROTECTED] wrote:
The be-all and end-all here though, I presume is similarity. Is it a
logic-al concept? Finding similarities - rough likenesses as opposed to
rational, precise, logicomathematical commonalities - is actually, I would
On Wed, Jul 2, 2008 at 5:31 AM, Terren Suydam [EMAIL PROTECTED] wrote:
Nevertheless, generalities among different instances of complex systems have
been identified, see for instance:
To be sure, but there are also plenty of complex systems
On Mon, Jun 30, 2008 at 8:10 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
My scepticism comes mostly from my personal observation that each complex
systems scientist I come across tends to know about one breed of complex
system, and have a great deal to say about that breed, but when I come
On Mon, Jun 30, 2008 at 8:31 AM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
P.S. The biggest issue that spoiled my joy of reading Permutation
City is that you cannot simulate dynamic systems (= solve
differential equations numerically) out of order; you need to know
time t to compute time t+1
On Fri, Jun 27, 2008 at 6:32 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
Unsupervised learning? This could be really good for looking for strange
things in blood samples. Now, I routinely order a manual differential white
count that requires someone to manually look over the blood cells with a
On Fri, Jun 27, 2008 at 7:38 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Just one gotcha
[two claimed gotchas snipped]
I disagree with your assessment - while I agree present government and
society have problems, as I see it history shows that the development
of technology in general, and
On Thu, Jun 26, 2008 at 6:12 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
Perhaps we could create a short database (maybe only a dozen or so entries)
of sample queries, activities, tasks, etc., that YOU would like to see YOUR
future AGIs performing to earn their electricity.
The approach I
Philosophically, intelligence explosion in the sense being discussed
here is akin to ritual magic - the primary fallacy is the attribution
to symbols alone of powers they simply do not possess.
The argument is that an initially somewhat intelligent program A can
generate a more intelligent
On Mon, Jun 23, 2008 at 3:43 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
We are very inefficient in processing evidence, there is plenty of
room at the bottom in this sense alone. Knowledge doesn't come from
just feeding the system with data - try to read machine learning
textbooks to a chimp,
On Mon, Jun 23, 2008 at 4:34 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 6:52 PM, Russell Wallace
Indeed, but becoming more efficient at processing evidence is
something that requires being embedded in the environment to which the
On Mon, Jun 23, 2008 at 5:22 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
But it can just work with a static corpus. When you need to figure out
efficient learning, you only need to know a little about the overall
structure of your data (which can be described by a reasonably small
On Mon, Jun 23, 2008 at 5:58 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Mon, Jun 23, 2008 at 8:32 PM, Russell Wallace
Why do you think that? All the evidence is to the contrary - the
examples we have of figuring out efficient learning, from evolution to
childhood play to formal education
On Mon, Jun 23, 2008 at 8:48 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
There are only evolution-built animals, which is a very limited
repertoire of intelligences. You are saying that if no apple tastes
like a banana, therefore no fruit tastes like a banana, even banana.
I'm saying if no
On Mon, Jun 23, 2008 at 11:57 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Oh yes, it can be proven. It requires an extended argument to do so
properly, which I won't attempt here.
Fair enough, I'd be interested to see your attempted proof if you ever
get it written up.
On Mon, May 26, 2008 at 6:26 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Regarding the best language for AGI development, most here know that I'm
using Java in Texai. For skill acquisition, my strategy is to have Texai
acquire a skill by composing a Java program to perform the learned skill. I
On Fri, May 16, 2008 at 4:10 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Wouldn't it be better to provide a super-wiki that could be selected to ONLY
display the professional content if that was what was wanted? How about a
cookie on everyone's computer that could select out porn,
On Sun, May 11, 2008 at 7:45 AM, William Pearson [EMAIL PROTECTED] wrote:
I'm starting to mod qemu (it is not a straightforward process) to add
So if I understand correctly, you're proposing to sandbox candidate
programs by running them in their own virtual PC, with their own
On Sat, May 10, 2008 at 1:14 AM, Stan Nilsen [EMAIL PROTECTED] wrote:
A test of understanding is if one can give a correct *explanation* for any
and all of the possible outputs that it (the thing to understand) produces.
Unfortunately, explanation is just as ambiguous a word as
On Sat, May 10, 2008 at 8:38 AM, William Pearson [EMAIL PROTECTED] wrote:
2) A system similar to automatic programming that takes descriptions
in a formal language given from the outside and potentially malicious
sources and generates a program from them. The language would be
On Sat, May 10, 2008 at 10:10 PM, William Pearson [EMAIL PROTECTED] wrote:
It depends on the system you are designing on. I think you can easily
create as many types of sand box as you want in programming language E
(1) for example. If the principle of least authority (2) is embedded
On Fri, May 9, 2008 at 1:51 AM, Jim Bromer [EMAIL PROTECTED] wrote:
I don't want to get into a quibble fest, but understanding is not
necessarily constrained to prediction.
Indeed, understanding is a fuzzy word that means lots of different
things in different contexts. In the context of
On Sun, May 4, 2008 at 1:55 PM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
If we imagine a brain scanner with perfect resolution of space and time, then
we get all the information of the brain, including the phenomenon of qualia.
But we will not be able to understand it.
That's an empirical
On Mon, May 5, 2008 at 11:01 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Armchair reasoning is a bad word.
I think it's a rather good one ^.^
It is not an empirical question. It is a question what answers we can get
from science in principle. Therefore it is a philosophical question.
On Tue, Apr 29, 2008 at 2:03 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Here, I think, is a more detailed start to what you're talking about: our
different ways of perceiving and thinking about the world.
Yes all this is absolutely central to solving AGI. What have I left out?
On Tue, Apr 29, 2008 at 12:52 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I claim that we can and do think in each of the 16 modes implied by the above
(and others as well).
That is certainly true...
I think the key to AI is not so much to figure how to operate in any given
On Wed, Apr 30, 2008 at 6:18 AM, J. Andrew Rogers
[EMAIL PROTECTED] wrote:
I will take a third position and point out that there is no real
distinction between these two categories, or at least if there is you are
doing it wrong. One of the amusing and fruitless patterns of behavior in
On Wed, Apr 30, 2008 at 4:11 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
I think the more traditional classification is D = symbolic, S =
pattern recognition/motor, or D = high level, S = low level. The
D-then-S approach has been popular not because it is biologically
plausible, but because
On Wed, Apr 30, 2008 at 5:29 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
By modeling symbolic knowledge in a neural network. I realize it is
horribly inefficient, but at least we have a working model to start
Inefficient is reasonable, but how do you propose to do it at all?
On Wed, Apr 30, 2008 at 4:59 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
Deliberative reasoning can be expressed as processing performed by an
inference circuit, a network that propagates activation and calculates
the result using logic gates. Particular deliberative algorithms can
On Wed, Apr 30, 2008 at 7:41 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Understanding can be as simple as matching terms in two documents, or
something more complex, such as matching a video clip to a text or
audio description. However, there is an incentive to develop
On Wed, Apr 30, 2008 at 8:13 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
That is why you can't learn to multiply numbers in your head like a
calculator (or maybe it's possible with sufficient understanding of
learning dynamics, but was never implemented...). You unfortunately