Russell Said:
*Oh, I can figure out how to solve most specific problems. From an AGI
point of view, however, that leaves the question of how those individual
solutions are going to serve as sources of knowledge for a system, rather
than separate specific programs. My answer is to build something
I just came up with an awesome test. Ask someone, anyone you know to name
something really big and obvious around them that they already know the
position of. Tell them to point to it and name it. Practically *every* time,
they will look at it just before or as they are naming it! And it feels
I've come to think lately that the solution to creating a realistic AGI
design is pseudo-design. What do I mean? Not simulation... not practical
applications... not extremely detailed implementations. The design would
start at a high level and go deeper into detail as far as possible.
So, why would this
I accidentally stumbled upon the website of Adaptive AI. I must say, it is
by FAR the best AGI approach and design I have ever seen. As I've read it
today and yesterday (haven't quite finished it all), I agreed with so much
of what he wrote that I could almost swear that I wrote it myself. He even
Does anyone know of a list, book or links about human reasoning examples?
I'm having such a hard time finding info on this. I don't want to have to
create all the examples myself, but I don't know where to look.
Has anyone thought about sort of self-assembling nano electrodes or other
nano detectors that could probe the vast majority of neurons and important
structures in a very small brain (such as a gnat brain or a C. elegans worm,
or even a larger animal)?
It seems to me that this would be a hell of a
I've become extremely fascinated with language acquisition. I am convinced
that we can tease out the algorithms that children use to learn language
from observations like the ones seen in the video link below. I'm about to
start watching the second video, but thought you guys might like watching
I just had this really interesting idea about neuroplasticity as I'm sitting
here listening to speeches at the Singularity Summit.
I was trying to figure out how neuroplasticity works and why the hell it is
that the brain can find the same patterns in input from completely different
senses. For
This seems to be an overly simplistic view of AGI from a mathematician. It's
kind of funny how people overemphasize what they know or depend on their
current expertise too much when trying to solve new problems.
I don't think it makes sense to apply sanitized and formal mathematical
solutions to
at 10:53 AM, David Jones davidher...@gmail.com wrote:
This seems to be an overly simplistic view of AGI from a mathematician.
It's kind of funny how people overemphasize what they know or depend on
their current expertise too much when trying to solve new problems.
I don't think it makes sense
at 1:40 PM, Jim Bromer jimbro...@gmail.com wrote:
On Wed, Aug 11, 2010 at 10:53 AM, David Jones davidher...@gmail.com wrote:
I don't think it makes sense to apply sanitized and formal mathematical
solutions to AGI. What reason do we have to believe that the problems we
face when developing AGI
Way too pessimistic in my opinion.
On Mon, Aug 9, 2010 at 7:06 PM, John G. Rose johnr...@polyplexic.com wrote:
Aww, so cute.
I wonder if it has a Wi-Fi connection, DHCP's an IP address, and relays
sensory information back to the main servers with all the other Naos, all
collecting personal
Steve,
Capable and effective AI systems would be very helpful at every step of the
research process. Basic research is a major area I think that AGI will be
applied to. In fact, that's exactly where I plan to apply it first.
Dave
On Tue, Aug 10, 2010 at 7:25 AM, Steve Richfield
I think the biggest thing to remember here is that general AI could be
applied to many different problems in parallel by many different people.
They would help with many aspects of the problem solving process, not just a
single one and certainly not just applied to a single experiment/study.
Bob, there are serious issues with such a suggestion.
The biggest issue is that there is a good chance it wouldn't work because
diseases, including the common cold, have incubation times. So, you may not
have any symptoms at all, yet you can pass it on to other people.
And even if we did know
and through.
*From:* David Jones davidher...@gmail.com
*Sent:* Sunday, August 08, 2010 1:59 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] How To Create General AI Draft2
Mike,
We've argued about this over and over and over. I don't want to repeat
previous arguments to you.
You
You see. This is precisely why I don't want to argue with Mike anymore. It
must be a physical pattern. LOL. Whoever said that patterns must be
physical? This is exactly why you can't see my point of view. You impose
unnecessary restrictions on any possible solution when there really are no
such
I already stated these. Read previous emails.
On Mon, Aug 9, 2010 at 8:48 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
PS Examples of nonphysical patterns AND how they are applicable to visual
AGI?
*From:* David Jones davidher...@gmail.com
*Sent:* Monday, August 09, 2010 1:34 PM
useful for another task.
The idea that chairs cannot be recognized because they come in all shapes,
sizes and structures is just wrong.
Dave
On Mon, Aug 9, 2010 at 8:47 AM, Mike Tintner tint...@blueyonder.co.uk wrote:
Examples of nonphysical patterns?
*From:* David Jones davidher...@gmail.com
- and must embrace many
diverse forms that strings may be shaped into.
*From:* David Jones davidher...@gmail.com
*Sent:* Monday, August 09, 2010 2:13 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] How To Create General AI Draft2
Mike,
Quoting a previous email:
QUOTE
In fact
percentage of what's needed to make a human-level, vaguely human-like
AGI. I.e., I don't agree that solving vision and the vision-cognition
bridge is *such* a huge part of AGI, though it's certainly a nontrivial
percentage...
-- Ben G
On Fri, Aug 6, 2010 at 4:44 PM, David Jones davidher
Ben,
Comments below.
On Mon, Aug 9, 2010 at 12:00 PM, Ben Goertzel b...@goertzel.org wrote:
The human visual system doesn't evolve like that on the fly. This can be
proven by the fact that we all see the same visual illusions. We all exhibit
the same visual limitations in the same way.
I've decided to go. I was wondering if anyone else here is going.
Dave
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
. He/It is a
freeform designer. You have to start thinking outside the
box/brick/fundamental unit.
*From:* David Jones davidher...@gmail.com
*Sent:* Sunday, August 08, 2010 5:12 AM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] How To Create General AI Draft2
Mike,
I took your comments
.
*From:* David Jones davidher...@gmail.com
*Sent:* Sunday, August 08, 2010 1:59 PM
To: agi
Subject: Re: [agi] How To Create General AI Draft2
Mike,
We've argued about this over and over and over. I don't want to repeat
previous arguments to you.
You have no proof that the world cannot
Hey Ben,
Faster, cheaper, and more robust 3D modeling for the movie industry. The
modeling allows different sources of video content to be extracted from
scenes, manipulated and mixed with others.
The movie industry has the money and motivation to extract data from images.
Making it easier, more
PM, David Jones davidher...@gmail.comwrote:
Hey Guys,
I've been working on writing out my approach to create general AI to
share and debate it with others in the field. I've attached my second draft
of it in PDF format, if you guys are at all interested. It's still a work in
progress
Mike,
I took your comments into consideration and have been updating my paper to
make sure these problems are addressed.
See more comments below.
On Fri, Aug 6, 2010 at 8:15 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
1) You don't define the difference between narrow AI and AGI - or make
On Fri, Aug 6, 2010 at 7:37 PM, Jim Bromer jimbro...@gmail.com wrote:
On Wed, Aug 4, 2010 at 9:27 AM, David Jones davidher...@gmail.com wrote:
*So, why computer vision? Why can't we just enter knowledge manually?
*
a) The knowledge we require for AI to do what we want is vast and complex
into them.
On Wed, Aug 4, 2010 at 9:10 AM, Jim Bromer jimbro...@gmail.com wrote:
On Tue, Aug 3, 2010 at 11:52 AM, David Jones davidher...@gmail.com wrote:
I've suddenly realized that computer vision of real images is very much
solvable and that it is now just a matter of engineering
shifts from absolute values to rates of change.
Steve
===
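Steve's shift from absolute values to rates of change can be sketched as delta-encoding a signal, i.e. a discrete dP/dt. This is a guess at the idea for illustration, not his actual formulation; the function name and sample data are made up:

```python
def rates_of_change(samples, dt=1.0):
    """Re-encode a sequence of absolute values as per-step rates of change
    (a discrete dP/dt). One fewer value comes out than went in, since each
    output is the difference between two consecutive inputs."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

# A toy pressure signal, sampled once per time unit:
pressure = [10.0, 12.0, 15.0, 15.0, 11.0]
# rates_of_change(pressure) -> [2.0, 3.0, 0.0, -4.0]
```

Downstream processing then sees how fast things change rather than where they sit in absolute terms.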
On Tue, Aug 3, 2010 at 8:52 AM, David Jones davidher...@gmail.com wrote:
I've suddenly realized that computer vision of real images is very much
solvable and that it is now just a matter of engineering. I was so stuck
, but it fits
the neat definition better than the scruffy one. Scruffy
has more to do with people-programmed ad hoc approaches (like most of AGI),
which I agree are a waste of time.
Steve
On Wed, Aug 4, 2010 at 12:43 PM, David Jones davidher...@gmail.com wrote:
Steve,
I
steve.richfi...@gmail.com wrote:
David
On Wed, Aug 4, 2010 at 1:16 PM, David Jones davidher...@gmail.com wrote:
3) requires manually created training data, which is a major problem.
Where did this come from? Certainly, people are ill-equipped to create
dP/dt type data. These would have to come
On Wed, Aug 4, 2010 at 6:17 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
David,
On Wed, Aug 4, 2010 at 1:45 PM, David Jones davidher...@gmail.com wrote:
Understanding what you are trying to accomplish and how you want the
system to work comes first, not math.
It's all the same
How about you go to war yourself or send your children. I'd rather send a
robot. It's safer for both the soldier and the people on the ground because
you don't have to shoot first, ask questions later.
And you're right, we shouldn't monitor anyone. We should just allow
terrorists to talk openly
Abram Wrote:
I take this as evidence that there is a very strong mental landscape...
if you go in a particular direction there is a natural series of landmarks,
including both great ideas and pitfalls that everyone runs into. (Different
people take different amounts of time to climb out of
:) Intelligence isn't limited to higher cognitive functions. One could say
a virus is intelligent or alive because it can replicate itself.
Intelligence is not just one function or ability, it can be many different
things. But mostly, for us, it comes down to what the system can accomplish
for
Sure. Thanks Arthur.
On Sun, Jul 25, 2010 at 10:42 AM, A. T. Murray menti...@scn.org wrote:
David Jones wrote:
Arthur,
Thanks. I appreciate that. I would be happy to aggregate some of those
things. I am sometimes not good at maintaining the website because I get
bored of maintaining
Deepak,
I have some insight on this question. There was a study regarding change
blindness. One of the study's famous experiments was having a person ask for
directions on a college campus. Then in the middle of this, a door would
pass between the person asking directions and the student giving
clues into how the
brain compresses and uses the relevant information while neglecting the
irrelevant information. But as Anast has demonstrated, the brain does need
priming in order to decide what is relevant and irrelevant. :)
Cheers,
Deepak
On Sun, Jul 25, 2010 at 5:34 AM, David Jones
lol. thanks Jim :)
On Thu, Jul 22, 2010 at 10:08 PM, Jim Bromer jimbro...@gmail.com wrote:
I have to say that I am proud of David Jones's efforts. He has really
matured during these last few months. I'm kidding but I really do respect
the fact that he is actively experimenting. I want to
of for measuring the
predictiveness? I can think of a few different possibilities (such as
measuring number incorrect vs measuring fraction incorrect, et cetera) but
I'm wondering which variations you consider significant/troublesome/etc.
--Abram
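The two scoring variants Abram asks about can be sketched directly. This is a minimal illustration of the distinction, not anything proposed in the thread; the function names and example data are made up:

```python
def count_incorrect(predictions, observations):
    """Raw error count: how many predictions disagree with what was observed."""
    return sum(1 for p, o in zip(predictions, observations) if p != o)

def fraction_incorrect(predictions, observations):
    """Error rate: incorrect predictions as a fraction of all predictions made.
    Unlike the raw count, this stays comparable between hypotheses that make
    different numbers of predictions."""
    if not predictions:
        return 0.0
    return count_incorrect(predictions, observations) / len(predictions)

# Two hypotheses predicting where a tracked object moves next:
obs = ["left", "left", "right", "left"]
h1 = ["left", "left", "left", "left"]   # 4 predictions, 1 wrong
h2 = ["left", "right"]                   # 2 predictions, 1 wrong
# Same raw count (1 each), but h2's error *rate* is twice h1's.
```

The troublesome case is exactly the one shown: a hypothesis that predicts little can look as good by count while being worse by fraction.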
On Thu, Jul 22, 2010 at 7:12 PM, David Jones
, David Jones davidher...@gmail.com wrote:
It's certainly not as simple as you claim. First, assigning a probability
is not always possible, nor is it easy. The factors in calculating that
probability are unknown and are not the same for every instance. Since we do
not know what combination
Matt,
Any method must deal with similar, if not the same, ambiguities. You need to
show how neural nets solve this problem or how they solve AGI goals while
completely skipping the problem. Until then, it is not a successful method.
Dave
On Jul 24, 2010 7:18 PM, Matt Mahoney
Check this out!
The title "Space and time, not surface features, guide object persistence"
says it all.
http://pbr.psychonomic-journals.org/content/14/6/1199.full.pdf
Over just the last couple days I have begun to realize that they are so
right. My idea before of using high frame rates is also
that the brain uses hierarchical features LOL
Dave
On Sat, Jul 24, 2010 at 11:52 PM, David Jones davidher...@gmail.com wrote:
Check this out!
The title "Space and time, not surface features, guide object persistence"
says it all.
http://pbr.psychonomic-journals.org/content/14/6/1199.full.pdf
Over
Yes. I think I may have discovered the keys to crack this puzzle wide open.
The brain seems to use simplistic heuristics for depth perception and
surface bounding. Once it has that, it can apply the spatiotemporal
heuristic I mentioned in other emails to identify and track an object, which
allows
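The space-and-time heuristic described above can be sketched as a nearest-neighbor tracker: a detection in the new frame inherits the identity of whichever tracked object was closest in the previous frame, ignoring surface features entirely. This is an illustrative guess at the idea, with made-up names and a made-up distance threshold:

```python
import math

def track(prev_objects, new_positions, max_jump=50.0):
    """Match each detection in the new frame to the nearest object from the
    previous frame, using only spatial continuity (never appearance).
    prev_objects: dict of object id -> (x, y) from the last frame.
    new_positions: list of (x, y) detections in the current frame.
    Returns a dict of object id -> new position; ids that moved farther
    than max_jump, or found no detection, are dropped."""
    assignments = {}
    unmatched = dict(prev_objects)
    for pos in new_positions:
        if not unmatched:
            break
        # Spatiotemporal heuristic: identity follows the smallest jump.
        best_id = min(unmatched, key=lambda i: math.dist(unmatched[i], pos))
        if math.dist(unmatched[best_id], pos) <= max_jump:
            assignments[best_id] = pos
            del unmatched[best_id]
    return assignments

prev = {"A": (0.0, 0.0), "B": (100.0, 0.0)}
new = [(5.0, 0.0), (95.0, 0.0)]
# track(prev, new) -> {"A": (5.0, 0.0), "B": (95.0, 0.0)}
```

Even if the two objects swapped colors between frames, this tracker would keep their identities, which is the point of the cited persistence result.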
An Update
I think the following gets to the heart of general AI and what it takes to
achieve it. It also provides us with evidence as to why general AI is so
difficult. With this new knowledge in mind, I think I will be much more
capable now of solving the problems and making it work.
I've
Because simpler is not better if it is less predictive.
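The ordering defended here can be sketched as a lexicographic comparison: rank hypotheses by predictiveness first, and let simplicity only break ties. The scoring fields and example hypotheses are made up for illustration:

```python
def rank_hypotheses(hypotheses):
    """Sort hypotheses best-first: most predictive wins outright; among
    equally predictive hypotheses, the shorter description wins.
    Each hypothesis is (name, fraction_correct, description_length)."""
    return sorted(hypotheses, key=lambda h: (-h[1], h[2]))

hyps = [
    ("everything is one blob",    0.60, 3),   # simplest, least predictive
    ("objects persist over time", 0.95, 10),
    ("objects + lighting model",  0.95, 25),  # equally predictive, more complex
]
best = rank_hypotheses(hyps)[0]
# Predictiveness dominates, then simplicity breaks the tie:
# best -> ("objects persist over time", 0.95, 10)
```

Under this scheme the one-blob hypothesis never wins on simplicity alone, which is exactly the reply to Abram's question.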
On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski abramdem...@gmail.com wrote:
Jim,
Why more predictive *and then* simpler?
--Abram
On Thu, Jul 22, 2010 at 11:49 AM, David Jones davidher...@gmail.com wrote:
An Update
I think
simpler when it
is smaller, then you can do the same thing without a program. I don't think
it makes any sense to do it this way.
It is not that simple. If it was, we could solve a large portion of AGI
easily.
On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney matmaho...@yahoo.com wrote:
David Jones
Training data is not available in many real problems. I don't think training
data should be used as the main learning mechanism. It likely won't solve
any of the problems.
On Jul 21, 2010 2:52 AM, deepakjnath deepakjn...@gmail.com wrote:
Yes we could do a 4x4 tic tac toe game like this in a PC.
Not really.
On Sun, Jul 18, 2010 at 9:41 AM, deepakjnath deepakjn...@gmail.com wrote:
Yes, but is there a competition like the XPrize or something that we can
work towards?
On Sun, Jul 18, 2010 at 6:40 PM, Panu Horsmalahti nawi...@gmail.com wrote:
2010/7/18 deepakjnath
If you can't convince someone, clearly something is wrong with it. I don't
think a test is the right way to do this. Which is why I haven't commented
much. When you understand how to create AGI, it will be obvious that it is
AGI or that it is what you intend it to be. You'll then understand how
Deepak,
I think you would be much better off focusing on something more practical.
Understanding a movie and all the myriad things going on, their
significance, etc... that's AI complete. There is no way you are going to
get there without a hell of a lot of steps in between. So, you might as well
Ian,
Although most people see natural language as one of the most important parts
of AGI, if you think about it carefully, you'll realize that solving natural
language could be done with sufficient knowledge of the world and sufficient
ability to learn this knowledge automatically. That's why I
knowledge to say that one hypothesis is better than another in
the vast majority of cases. The AI doesn't have sufficient *reason* to think
that the right hypothesis is better than others. The only way to give it
that sufficient reason is to give it sufficient knowledge.
Dave
2010/7/18 David Jones
This is actually a great example of why we should not try to write AGI as
something able to solve any possible problem generally. We, strong AI
agents, are not able to understand this sentence without quite a lot more
information. Likewise, we shouldn't expect a general AI to try many
likely to for a v. v. long time.
*From:* David Jones davidher...@gmail.com
*Sent:* Friday, July 16, 2010 4:35 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] NL parsing
This is actually a great example of why we should not try to write AGI as
something able to solve any possible
(although not as simple as black squares :)
) and work my way up again.
Dave
On Wed, Jul 14, 2010 at 12:59 PM, David Jones davidher...@gmail.com wrote:
Actually, I just realized that there is a way to include inductive
knowledge and experience into this algorithm. Inductive knowledge
) of the description of the hypothesis. The value is language dependent,
so this method is not perfect.
-- Matt Mahoney, matmaho...@yahoo.com
--
*From:* David Jones davidher...@gmail.com
*To:* agi agi@v2.listbox.com
*Sent:* Thu, July 15, 2010 10:22:44 AM
*Subject:* Re
, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:
What do you mean by definitive events?
I was just trying to find a way to designate observations that would be
reliably obvious to a computer program. This has something to do with the
assumptions that you are using
in
a body becomes.
*From:* David Jones davidher...@gmail.com
*Sent:* Thursday, July 15, 2010 5:54 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] How do we Score Hypotheses?
Jim,
even that isn't an obvious event. You don't know what is background and
what is not. You don't even know
What do you mean by definitive events?
I guess the first problem I see with my approach is that the movement of the
window is also a hypothesis. I need to analyze it in more detail and see how
the tree of hypotheses affects the hypotheses regarding the es on the
windows.
What I believe is that
of hypotheses, conflicts and unexpected
observations until we find a good hypothesis. Something like that. I'm
attempting to construct an algorithm for doing this as I analyze specific
problems.
Dave
On Wed, Jul 14, 2010 at 10:22 AM, David Jones davidher...@gmail.com wrote:
What do you mean
Abram,
Thanks for the clarification Abram. I don't have a single way to deal with
uncertainty. I try not to decide on a method ahead of time because what I
really want to do is analyze the problems and find a solution. But, at the
same time, I have looked at the probabilistic approaches and they
Mike, you are so full of it. There is a big difference between *can* and
*don't*. You have no proof that programs can't handle anything you say they
can't.
On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
The first thing is to acknowledge that programs *don't*
Mike,
see below.
On Tue, Jul 13, 2010 at 2:36 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
The first thing is to acknowledge that programs *don't* handle concepts -
if you think they do, you must give examples.
The reasons they can't, as presently conceived, is
a) concepts encase a
It includes a (short-ish?) section comparing the pros/cons of MDL and
Bayesianism, and examining some of the mathematical linkings between them--
with the aim of showing that MDL is a broader principle. I disagree there,
of course. :)
--Abram
On Tue, Jul 13, 2010 at 10:01 AM, David Jones davidher
I've been trying to figure out how to score hypotheses. Do you guys have any
constructive ideas about how to define the way you score hypotheses like
these a little better? I'll define the problem below in detail. I know Abram
mentioned MDL, which I'm about to look into. Does that even apply to
with specified kinds of actions and objects - they cannot
deal with unspecified kinds of actions and objects, period. You won't
produce any actual examples to the contrary.
*From:* David Jones davidher...@gmail.com
*Sent:* Tuesday, July 13, 2010 8:00 PM
*To:* agi agi@v2.listbox.com
*Subject
Thanks Abram,
I know that probability is one approach. But there are many problems with
using it in actual implementations. I know a lot of people will be angered
by that statement and retort with all the successes that they have had using
probability. But, the truth is that you can solve the
Mike,
Using the image itself as a template to match is possible. In fact it has
been done before. But it suffers from several problems that would also need
solving.
1) Images are 2D. I assume you are also referring to 2D outlines. Real
objects are 3D. So, you're going to have to infer the shape
I accidentally pressed something and it sent it early... this is a finished
version:
Mike,
Using the image itself as a template to match is possible. In fact it has
been done before. But it suffers from several problems that would also need
solving.
1) Images are 2D. I assume you are also
* in theory
absolute setform=freeform nonsense, you will in practice always, always
stick to setform objects. Some part of you knows the v.obvious truth ).
*From:* David Jones davidher...@gmail.com
*Sent:* Saturday, July 10, 2010 3:51 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] Re
On Sat, Jul 10, 2010 at 5:02 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
Dave: You can't solve the problems with your approach either
This is based on knowledge of what examples? Zero?
It is based on the fact that you have refused to show how you deal with
uncertainty. You haven't even
Mike,
On Thu, Jul 8, 2010 at 6:52 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
Isn't the first problem simply to differentiate the objects in a scene?
Well, that is part of the movement problem. If you say something moved, you
are also saying that the objects in the two or more video
the
world will be like. They aren't able to learn about any world. They are
optimized to configure their brains for this world.
*From:* David Jones davidher...@gmail.com
*Sent:* Friday, July 09, 2010 1:56 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] Re: Huge Progress on the Core of AGI
as a more primary sense, well before it got to vision.
*Or perhaps it may prove better to start with robot snakes/bodies or
somesuch.
*From:* David Jones davidher...@gmail.com
*Sent:* Friday, July 09, 2010 3:22 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] Re: Huge Progress
Although I haven't studied Solomonoff induction yet (I plan to read
up on it), I've realized that people seem to be making the same mistake I
was. People are trying to find one silver bullet method of induction or
learning that works for everything. I've begun to realize that it's OK if
a single silver bullet, the more we will just break against the fact that
none exists.
On Fri, Jul 9, 2010 at 11:35 AM, David Jones davidher...@gmail.com wrote:
Although I haven't studied Solomonoff induction yet (I plan to
read up on it), I've realized that people seem to be making
certainly didn't have one.
*From:* David Jones davidher...@gmail.com
*Sent:* Friday, July 09, 2010 4:20 PM
*To:* agi agi@v2.listbox.com
*Subject:* Re: [agi] Re: Huge Progress on the Core of AGI
Mike,
Please outline your algorithm for fluid schemas though. It will be clear
when you do
less general solutions for
multiple problem types and environments. The best way to do this is to
choose representative case studies to solve and make sure the solutions are
truth-tropic and justified for the environments they are to be applied.
Dave
On Sun, Jun 27, 2010 at 1:31 AM, David Jones
puddle in the middle of the desert :).
Dave
On Thu, Jul 8, 2010 at 3:17 PM, David Jones davidher...@gmail.com wrote:
I've learned something really interesting today. I realized that general
rules of inference probably don't really exist. There is no such thing as
complete generality
visual processing
algorithm. Learning algorithms suffer similar environment-dependence, but
(by their nature) not as severe...
--Abram
On Thu, Jul 8, 2010 at 3:17 PM, David Jones davidher...@gmail.com wrote:
I've learned something really interesting today. I realized that general
rules
a specified portion of video given a different portion. Do
you object to this approach?
--Abram
On Thu, Jul 8, 2010 at 5:30 PM, David Jones davidher...@gmail.com wrote:
It may not be possible to create a learning algorithm that can learn how
to generally process images and other general AGI problems
narrow AI is a term that describes the solution to a problem, not the
problem. It is a solution with a narrow scope. General AI, on the other hand,
should have a much larger scope than narrow AI and be able to handle
unforeseen circumstances.
What I don't think you realize is that open sets can be
Nice Occam's Razor argument. I understood it simply because I knew there is
always an infinite number of possible explanations for every observation
that are more complicated than the simplest explanation. So, without a
reason to choose one of those other interpretations, then why choose it? You
If anyone has any knowledge of or references to the state of the art in
explanation-based reasoning, can you send me keywords or links? I've read
some through google, but I'm not really satisfied with anything I've found.
Thanks,
Dave
On Sun, Jun 27, 2010 at 1:31 AM, David Jones davidher
29, 2010 at 10:29 AM, Matt Mahoney matmaho...@yahoo.com wrote:
David Jones wrote:
If anyone has any knowledge of or references to the state of the art in
explanation-based reasoning, can you send me keywords or links?
The simplest explanation of the past is the best predictor of the future
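Matt's principle can be sketched as: among explanations consistent with everything observed so far, predict with the one that has the shortest description. The description lengths below are toy values, not output of a real coder, and all names are illustrative:

```python
def best_explanation(explanations, history):
    """Pick the shortest-description explanation that reproduces the history.
    Each explanation is (description_length, predict_fn), where predict_fn(t)
    returns the symbol expected at step t."""
    consistent = [
        (length, predict)
        for length, predict in explanations
        if all(predict(t) == sym for t, sym in enumerate(history))
    ]
    # Occam: among explanations that fit the past, prefer the shortest.
    return min(consistent, key=lambda e: e[0])

history = [0, 1, 0, 1, 0, 1]
explanations = [
    (40, lambda t: [0, 1, 0, 1, 0, 1, 1][t]),  # memorized table, longer
    (8,  lambda t: t % 2),                      # "alternate 0 and 1", shorter
]
length, predict = best_explanation(explanations, history)
# The shorter consistent explanation is chosen and predicts the next symbol:
# predict(6) -> 0
```

Both explanations fit the past equally well; they only disagree about the future, and the simpler one is the one trusted.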
the verbal/letter signals involved in NLP are.
What you need to do - what anyone in your situation with anything like your
aspirations needs to do - is to hook up with a roboticist. Everyone here
should be doing that.
*From:* David Jones davidher...@gmail.com
*Sent:* Tuesday, June 29, 2010 5
at 2:51 PM, Matt Mahoney matmaho...@yahoo.com wrote:
David Jones wrote:
I wish people understood this better.
For example, animals can be intelligent even though they lack language
because they can see. True, but an AGI with language skills is more useful
than one without.
And yes, I realize
the purpose of text is to convey something. It has to be interpreted. Who
cares about predicting the next word if you can't interpret a single bit of
it.
On Tue, Jun 29, 2010 at 3:43 PM, David Jones davidher...@gmail.com wrote:
People do not predict the next words of text. We anticipate
. These examples don't really show
anything.
Dave
On Tue, Jun 29, 2010 at 3:15 PM, Matt Mahoney matmaho...@yahoo.com wrote:
David Jones wrote:
I really don't think this is the right way to calculate simplicity.
I will give you an example, because examples are more convincing than
proofs
applicable to general vision?
Get thee to a roboticist, make contact with the real world.
Get yourself to a psychologist so that they can show you how flawed your
reasoning is. Fallacy upon fallacy. You are not in touch with reality.
*From:* David Jones davidher...@gmail.com
*Sent:* Tuesday
Scratch my statement about it being useless :) It's useful, but nowhere
near sufficient for AGI-like understanding.
On Tue, Jun 29, 2010 at 4:58 PM, David Jones davidher...@gmail.com wrote:
notice how you said *context* of the conversation. The context is the real
world, and is completely
Mike,
Alive vs. dead? As I've said before, there is no actual difference. It is
not a qualitative difference that makes something alive or dead. It is a
quantitative difference. They are both controlled by physics. I don't mean
the nice clean physics rules that we approximate things with, I mean
Yeah. I forgot to mention that robots are not alive yet could act
indistinguishably from what is alive. The concept of alive is likely
something that requires inductive type reasoning and generalization to
learn. Categorization, similarity analysis, etc could assist in making such
distinctions as
In case anyone missed it... Problems are not AGI. Solutions are. And AGI
is not the right adjective anyway. The correct word is general. In other
words, generally applicable to other problems. I repeat, Mike, you are *
wrong*. Did anyone miss that?
To recap, it has nothing to do with what problem
my
strategy to the alternatives.
Dave
On Mon, Jun 28, 2010 at 3:56 PM, David Jones davidher...@gmail.com wrote:
That does not have to be the case. Yes, you need to know what problems you
might have in more complicated domains to avoid developing completely
useless theories on toy problems
. It is
great at certain narrow applications, but nowhere near where it needs to be
for AGI.
Dave
On Mon, Jun 28, 2010 at 4:00 PM, Russell Wallace
russell.wall...@gmail.com wrote:
On Mon, Jun 28, 2010 at 8:56 PM, David Jones davidher...@gmail.com
wrote:
Having experience with the full problem