Re: [agi] [GoogleTech Talk] Case-based reasoning for game AI

2008-12-30 Thread Daniel Allen
Thanks.  It's a good one :)


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Steve Richfield
J. Andrew,

On 12/30/08, J. Andrew Rogers  wrote:
>
>
> On Dec 30, 2008, at 12:51 AM, Steve Richfield wrote:
>
>> On a side note, there is the "clean" math that people learn on their way
>> to a math PhD, and then there is the "dirty" math that governs physical
>> systems. Dirty math is fraught with all sorts of multi-valued functions,
>> fundamental uncertainties, etc. To work in the world of "dirty" math, you
>> must escape the notation and figure out what the equation is all about, and
>> find some way of representing THAT, which may well not involve simple
>> numbers on the real-number line, or even on the complex number plane.
>>
>
>
> What does "dirty math" really mean?  There are engineering disciplines
> essentially *built* on solving equations with gross internal inconsistencies
> and unsolvable systems of differential equations. The modern world gets
> along pretty admirably suffering the very profitable and ubiquitous
> consequences of their quasi-solutions to those problems.  But it is still a
> lot of hairy notational math and equations, just applied in a different
> context that has function uncertainty as an assumption. The unsolvability
> does not lead them to pull numbers out of a hat, they have sound methods for
> brute-forcing fine approximations across a surprisingly wide range of
> situations. When the "clean" mathematical methods do not apply, there are
> other different (not "dirty") mathematical methods that you can use.


The "dirty" line is rather fuzzy, but you know you've crossed it when,
instead of locations, things have "probability spaces", or when you are
trying to numerically solve systems of simultaneous equations and at least
one of them always seems to produce NaNs. Algebra was designed for the
"real world" as we experience it, and works for most engineering problems,
but it often runs aground in theoretical physics, at least until you abandon
the idea of a 1:1 correspondence between states and variables.
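Steve's NaN remark is easy to reproduce; here is a minimal sketch (my illustration, not anything from the thread) of how NaNs are born when naively iterating a diverging system:

```python
import math

# A diverging iterate overflows to inf after a handful of squarings;
# a naive correction step then computes inf - inf, which IEEE 754
# defines as NaN, and the NaN propagates through everything downstream.
x = 10.0
for _ in range(400):
    x = x * x              # 10, 1e2, 1e4, ... quickly overflows to inf

residual = x - x           # inf - inf is NaN
```

Once one equation in a coupled system emits such a value, it contaminates every quantity it touches, which is why "at least one of them produces NaNs" is such a familiar feeling.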

> Indeed, I have sometimes said the only real education I ever got in AI was
> spending years studying an engineering discipline that is nothing but
> reducing very complex systems of pervasively polluted data and nonsense
> equations to precise predictive models where squeezing out an extra 1%
> accuracy meant huge profit.  None of it is directly applicable, the value
> was internalizing that kind of systems perspective and thinking about every
> complex systems problem in those terms, with a lot of experience
> algorithmically producing predictive models from them. It was different but
> it was still ordinary math, just math appropriate for the particular
> problem.


Bingo! You have to "tailor" the techniques to the problem. This is more than
just "solving the equations"; often the representation of quantities needs
to be in some sort of multi-valued form.

> The only thing you could really say about it was that it produced a lot of
> great computer scientists and no mathematicians to speak of (an odd bias,
> that).


Yeah, but I'd bet that you got pretty good at numerical analysis ;-)

>> With this as background, as I see it, hypercomputation is just another
>> attempt to evade dealing with some hard mathematical problems.
>>
>
>
> The definition of "hypercomputation" captures some very specific
> mathematical concepts that are not captured in other conceptual terms.  I do
> not see what is being evaded,


... which is where the break probably is. If someone is going to claim that
Turing machines are incapable of doing something, then it seems important to
state just what that "something" is.

> since it is more like pointing out the obvious with respect to certain
> limits implied by the conventional Turing model.


I wonder if we aren't really talking about analog computation (i.e.
computing with analogues, e.g. molecules) here? Analog computers have been
handily out-computing digital computers for a long time. One analog computer
that produced tide tables, now in a glass case at the NOAA headquarters,
performed well for ~100 years until it was finally replaced by a large CDC
computer - and probably now with a PC. Some magnetic systems engineers still
resort to fish tank analogs rather than deal with software.
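The tide machine Steve describes worked by mechanically summing harmonic constituents; a digital stand-in evaluates the same sum. A hedged sketch follows: the amplitudes, speeds, and phases are illustrative placeholders, not real NOAA harmonic constants.

```python
import math

# Each tidal constituent contributes one cosine of known angular speed.
# Amplitudes (m), speeds (deg/hour), and phases (deg) here are made-up
# placeholders for illustration only.
CONSTITUENTS = [
    (1.00, 28.984, 0.0),   # an "M2"-like semidiurnal term (illustrative)
    (0.30, 30.000, 40.0),  # an "S2"-like term (illustrative)
]

def tide_height(t_hours, mean_level=0.0):
    """Predicted water level at time t, as a sum of harmonic terms."""
    return mean_level + sum(
        amp * math.cos(math.radians(speed * t_hours - phase))
        for amp, speed, phase in CONSTITUENTS
    )
```

The mechanical machine did exactly this with gears and pulleys; the only difference now is where the additions happen.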

Steve Richfield





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
I'm heading off on a vacation for 4-5 days [with occasional email access]
and will probably respond to this when i get back ... just wanted to let you
know I'm not ignoring the question ;-)

ben

On Tue, Dec 30, 2008 at 1:26 PM, William Pearson wrote:

> 2008/12/30 Ben Goertzel :
> >
> > It seems to come down to the simplicity measure... if you can have
> >
> > simplicity(Turing program P that generates lookup table T)
> > <
> > simplicity(compressed lookup table T)
> >
> > then the Turing program P can be considered part of a scientific
> > explanation...
> >
>
> Can you clarify what type of language this is in? You mention
> L-expressions, but it is not very clear what that means; lambda
> expressions, I'm guessing.
>
> If you start with a language that has infinity built into its fabric,
> TMs will be simple. However, if you start with a language that only
> allows FSMs to be specified, e.g. regular expressions, you won't be
> able to specify TMs simply, as you need to represent an infinitely
> long tape in order to define a TM.
>
> Is this analogous to the argument at the end of section 3? That is the
> bit that is least clear as far as I am concerned.
>
>  Will
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying."
-- Groucho Marx





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/30 Ben Goertzel :
>
> It seems to come down to the simplicity measure... if you can have
>
> simplicity(Turing program P that generates lookup table T)
> <
> simplicity(compressed lookup table T)
>
> then the Turing program P can be considered part of a scientific
> explanation...
>

Can you clarify what type of language this is in? You mention
L-expressions, but it is not very clear what that means; lambda
expressions, I'm guessing.

If you start with a language that has infinity built into its fabric,
TMs will be simple. However, if you start with a language that only
allows FSMs to be specified, e.g. regular expressions, you won't be
able to specify TMs simply, as you need to represent an infinitely
long tape in order to define a TM.
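Will's point about regular expressions can be made concrete with the classic non-regular language a^n b^n; this sketch (my own illustration, not from Ben's paper) contrasts a finite-state approximation with an unbounded-memory recognizer:

```python
import re

def is_anbn(s):
    """Recognize a^n b^n exactly. This needs a counter that grows with
    the input, so no finite-state machine / regular expression can do it."""
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

# A regex is finite-state: the best it can manage is a regular
# approximation that also accepts strings outside the language.
approx = re.compile(r"a+b+")

assert is_anbn("aaabbb") and not is_anbn("aabbb")
assert approx.fullmatch("aabbb")   # accepted by the FSM, not in a^n b^n
```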

Is this analogous to the argument at the end of section 3? That is the
bit that is least clear as far as I am concerned.

  Will




RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
The main point being that consciousness affects multi-agent collective
intelligence. Theoretically it could be used to further a goal of
compression, since compression and intelligence are related, though
compression seems narrower (or attempting to compress does, at least).

Either way, this is not nonsense. Contemporary compression has yet to get
very close to the theoretical maximum, so exploring the space of potential
mechanisms, especially intelligence-related facets like consciousness and
multi-agent consciousness, could yield candidates for a new hack. I think,
though, that attempting to get close to maximum compression is not as
related to the goal of efficient compression...

 

John

 

From: Matt Mahoney [mailto:matmaho...@yahoo.com] 
Sent: Tuesday, December 30, 2008 8:47 AM
To: agi@v2.listbox.com
Subject: RE: [agi] Universal intelligence test benchmark

 


John,
So if consciousness is important for compression, then I suggest you write
two compression programs, one conscious and one not, and see which one
compresses better. 

Otherwise, this is nonsense.

-- Matt Mahoney, matmaho...@yahoo.com

--- On Tue, 12/30/08, John G. Rose  wrote:

From: John G. Rose 
Subject: RE: [agi] Universal intelligence test benchmark
To: agi@v2.listbox.com
Date: Tuesday, December 30, 2008, 9:46 AM

If the agents were p-zombies or just not conscious they would have different
motivations.

 

Consciousness has properties of a communication protocol and affects
inter-agent communication. The idea being that it enhances agents' existence
and survival. I assume it facilitates collective intelligence, generally. For
a multi-agent system with a goal of compression or prediction, the agent
consciousness would have to be catered for. So introducing

Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"

to the agents would give them more "glue" if they expended that
consciousness on one another. The communications dynamics of the system
would change versus a similar non-conscious multi-agent system.

 

John

 

From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Monday, December 29, 2008 2:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Universal intelligence test benchmark

 


Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney <
 matmaho...@yahoo.com> wrote:

--- On Mon, 12/29/08, John G. Rose < 
johnr...@polyplexic.com> wrote:

> > What does consciousness have to do with the rest of your argument?
> >
>
> Multi-agent systems should need individual consciousness to
> achieve advanced
> levels of collective intelligence. So if you are
> programming a multi-agent
> system, potentially a compressor, having consciousness in
> the agents could
> have an intelligence amplifying effect instead of having
> non-conscious
> agents. Or some sort of primitive consciousness component
> since higher level
> consciousness has not really been programmed yet.
>
> Agree?

No. What do you mean by "consciousness"?

Some people use "consciousness" and "intelligence" interchangeably. If that
is the case, then you are just using a circular argument. If not, then what
is the difference?


-- Matt Mahoney,   matmaho...@yahoo.com










Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Ben Goertzel
It seems to come down to the simplicity measure... if you can have

simplicity(Turing program P that generates lookup table T)
<
simplicity(compressed lookup table T)

then the Turing program P can be considered part of a scientific
explanation...
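As a crude sanity check of that inequality (my sketch, using byte counts as a stand-in for any principled simplicity measure, with zlib playing the role of the compressed lookup table):

```python
import zlib

# Lookup table: f(n) = n^2 for n < 1000, serialized as text.
table = ",".join(str(n * n) for n in range(1000)).encode()

# A short program (as a byte string) that generates the same table.
program = b"','.join(str(n*n) for n in range(1000))"

compressed_table = zlib.compress(table, 9)

# simplicity(program P generating T) < simplicity(compressed table T):
assert len(program) < len(compressed_table) < len(table)
```

Byte length is of course not Kolmogorov complexity, but the ordering it shows here is the shape of the argument: the generating program wins, so it earns a place in the explanation.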


On Tue, Dec 30, 2008 at 10:02 AM, William Pearson wrote:

> 2008/12/29 Ben Goertzel :
> >
> > Hi,
> >
> > I expanded a previous blog entry of mine on hypercomputation and AGI into
> a
> > conference paper on the topic ... here is a rough draft, on which I'd
> > appreciate commentary from anyone who's knowledgeable on the subject:
> >
> > http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf
> >
> I'm still a bit fuzzy about your argument, so I am going to ask a
> question to hopefully clarify things somewhat.
>
> Couldn't you use similar arguments to say that we can't use science to
> distinguish between finite state machines and Turing machines, and
> thus question the usefulness of Turing machines for science? Since if
> you are talking about finite data sets, these can always be represented
> by a giant compressed lookup table.
>
>  Will
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying."
-- Groucho Marx





RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread Matt Mahoney
John,
So if consciousness is important for compression, then I suggest you write two 
compression programs, one conscious and one not, and see which one compresses 
better. 

Otherwise, this is nonsense.

-- Matt Mahoney, matmaho...@yahoo.com

--- On Tue, 12/30/08, John G. Rose  wrote:
From: John G. Rose 
Subject: RE: [agi] Universal intelligence test benchmark
To: agi@v2.listbox.com
Date: Tuesday, December 30, 2008, 9:46 AM




 
 






If the agents were p-zombies or just not conscious they would have different
motivations.

Consciousness has properties of a communication protocol and affects
inter-agent communication. The idea being that it enhances agents' existence
and survival. I assume it facilitates collective intelligence, generally. For
a multi-agent system with a goal of compression or prediction, the agent
consciousness would have to be catered for. So introducing

Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"

to the agents would give them more "glue" if they expended that
consciousness on one another. The communications dynamics of the system
would change versus a similar non-conscious multi-agent system.

John

From: Ben Goertzel [mailto:b...@goertzel.org]
Sent: Monday, December 29, 2008 2:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Universal intelligence test benchmark

Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney wrote:

--- On Mon, 12/29/08, John G. Rose wrote:

> > What does consciousness have to do with the rest of your argument?
> >
>
> Multi-agent systems should need individual consciousness to
> achieve advanced
> levels of collective intelligence. So if you are
> programming a multi-agent
> system, potentially a compressor, having consciousness in
> the agents could
> have an intelligence amplifying effect instead of having
> non-conscious
> agents. Or some sort of primitive consciousness component
> since higher level
> consciousness has not really been programmed yet.
>
> Agree?

No. What do you mean by "consciousness"?

Some people use "consciousness" and "intelligence" interchangeably. If that
is the case, then you are just using a circular argument. If not, then what
is the difference?

-- Matt Mahoney, matmaho...@yahoo.com


  

  


 






Re: [agi] Hypercomputation and AGI

2008-12-30 Thread William Pearson
2008/12/29 Ben Goertzel :
>
> Hi,
>
> I expanded a previous blog entry of mine on hypercomputation and AGI into a
> conference paper on the topic ... here is a rough draft, on which I'd
> appreciate commentary from anyone who's knowledgeable on the subject:
>
> http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf
>
I'm still a bit fuzzy about your argument, so I am going to ask a
question to hopefully clarify things somewhat.

Couldn't you use similar arguments to say that we can't use science to
distinguish between finite state machines and Turing machines, and thus
question the usefulness of Turing machines for science? Since if you are
talking about finite data sets, these can always be represented by a
giant compressed lookup table.

 Will




RE: [agi] Universal intelligence test benchmark

2008-12-30 Thread John G. Rose
If the agents were p-zombies or just not conscious they would have different
motivations.

 

Consciousness has properties of a communication protocol and affects
inter-agent communication. The idea being that it enhances agents' existence
and survival. I assume it facilitates collective intelligence, generally. For
a multi-agent system with a goal of compression or prediction, the agent
consciousness would have to be catered for. So introducing

Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"

to the agents would give them more "glue" if they expended that
consciousness on one another. The communications dynamics of the system
would change versus a similar non-conscious multi-agent system.

 

John

 

From: Ben Goertzel [mailto:b...@goertzel.org] 
Sent: Monday, December 29, 2008 2:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Universal intelligence test benchmark

 


Consciousness of X is: the idea or feeling that X is correlated with
"Consciousness of X"

;-)

ben g

On Mon, Dec 29, 2008 at 4:23 PM, Matt Mahoney  wrote:

--- On Mon, 12/29/08, John G. Rose  wrote:

> > What does consciousness have to do with the rest of your argument?
> >
>
> Multi-agent systems should need individual consciousness to
> achieve advanced
> levels of collective intelligence. So if you are
> programming a multi-agent
> system, potentially a compressor, having consciousness in
> the agents could
> have an intelligence amplifying effect instead of having
> non-conscious
> agents. Or some sort of primitive consciousness component
> since higher level
> consciousness has not really been programmed yet.
>
> Agree?

No. What do you mean by "consciousness"?

Some people use "consciousness" and "intelligence" interchangeably. If that
is the case, then you are just using a circular argument. If not, then what
is the difference?


-- Matt Mahoney, matmaho...@yahoo.com










Re: [agi] [Science Daily] Our Unconscious Brain Makes The Best Decisions Possible

2008-12-30 Thread Vladimir Nesov
On Tue, Dec 30, 2008 at 12:44 AM, Kaj Sotala  wrote:
> On Mon, Dec 29, 2008 at 10:15 PM, Lukasz Stafiniak  wrote:
>> http://www.sciencedaily.com/releases/2008/12/081224215542.htm
>>
>> Nothing surprising ;-)
>
> So they have a result saying that we're good at subconsciously
> estimating the direction in which dots on a screen are moving.
> Apparently this can be safely generalized into "Our Unconscious Brain
> Makes The Best Decisions Possible (implied: "always")".
>
> You're right, nothing surprising. Just the kind of unfounded,
> simplistic hyperbole I'd expect from your average science reporter.
> ;-)
>

Here is a critique of the article:

http://neurocritic.blogspot.com/2008/12/deal-no-deal-or-dots.html

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




[agi] [GoogleTech Talk] Case-based reasoning for game AI

2008-12-30 Thread Lukasz Stafiniak
The lecture is actually about more than just CBR.
I recommend watching if you're bored; it's really entertaining :-)

http://machineslikeus.com/news/video-case-based-reasoning-game-ai

Bits of it seem similar to what Novamente is working on.
Ambitious, but with an engineering rather than an AGI-focused spirit.




Re: [agi] Hypercomputation and AGI

2008-12-30 Thread J. Andrew Rogers


On Dec 30, 2008, at 12:51 AM, Steve Richfield wrote:
On a side note, there is the "clean" math that people learn on their
way to a math PhD, and then there is the "dirty" math that governs
physical systems. Dirty math is fraught with all sorts of multi-valued
functions, fundamental uncertainties, etc. To work in the world of
"dirty" math, you must escape the notation and figure out what the
equation is all about, and find some way of representing THAT, which
may well not involve simple numbers on the real-number line, or even
on the complex number plane.



What does "dirty math" really mean?  There are engineering disciplines  
essentially *built* on solving equations with gross internal  
inconsistencies and unsolvable systems of differential equations. The  
modern world gets along pretty admirably suffering the very profitable  
and ubiquitous consequences of their quasi-solutions to those  
problems.  But it is still a lot of hairy notational math and  
equations, just applied in a different context that has function  
uncertainty as an assumption. The unsolvability does not lead them to  
pull numbers out of a hat, they have sound methods for brute-forcing  
fine approximations across a surprisingly wide range of situations.  
When the "clean" mathematical methods do not apply, there are other  
different (not "dirty") mathematical methods that you can use.


Indeed, I have sometimes said the only real education I ever got in AI  
was spending years studying an engineering discipline that is nothing  
but reducing very complex systems of pervasively polluted data and  
nonsense equations to precise predictive models where squeezing out an  
extra 1% accuracy meant huge profit.  None of it is directly  
applicable, the value was internalizing that kind of systems  
perspective and thinking about every complex systems problem in those  
terms, with a lot of experience algorithmically producing predictive  
models from them. It was different but it was still ordinary math,  
just math appropriate for the particular problem.  The only thing you  
could really say about it was that it produced a lot of great computer  
scientists and no mathematicians to speak of (an odd bias, that).



 With this as background, as I see it, hypercomputation is just  
another attempt to evade dealing with some hard mathematical problems.



The definition of "hypercomputation" captures some very specific  
mathematical concepts that are not captured in other conceptual  
terms.  I do not see what is being evaded, since it is more like  
pointing out the obvious with respect to certain limits implied by the  
conventional Turing model.


Cheers,

J. Andrew Rogers





Re: [agi] Hypercomputation and AGI

2008-12-30 Thread Steve Richfield
Ben,

I read your paper and have the following observations...

From 1968 to 1970, I was the in-house numerical analysis and computer
consultant at the University of Washington departments of Physics and
Astronomy. At that time, Ira Karp, then the physics grad student who had
been a grad student longer than any other in the history of the Physics
department, was working on simulating the Schrodinger equation, the very
equation that some today think is uncomputable.

We had to devise methods to get past some horrendous numerical problems,
such as computing some of the terms three different ways and taking the
median value.
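The median-of-three safeguard can be sketched as follows (a constructed example, not Ira's actual code): evaluate the same term via three algebraically equivalent formulas and keep the middle value, so a cancellation-poisoned evaluation is discarded automatically.

```python
import math

def term_a(x):
    return (1.0 - math.cos(x)) / x**2                    # cancellation-prone near 0

def term_b(x):
    return (math.sin(x) / x) ** 2 / (1.0 + math.cos(x))  # equivalent rewrite

def term_c(x):
    return 2.0 * math.sin(x / 2.0) ** 2 / x**2           # half-angle rewrite

def robust_term(x):
    # All three are algebraically (1 - cos x)/x^2; the median discards
    # whichever evaluation went numerically bad.
    return sorted([term_a(x), term_b(x), term_c(x)])[1]
```

Near x = 0 the first form collapses toward 0.0 from cancellation while the true limit is 0.5; the median quietly sides with the two stable rewrites.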

Ira got his PhD, and in so doing, pretty much settled the debate as
to whether such phenomena are "computable" using conventional computers.
Rest assured, they ARE computable.

Given the poor employment situation for physics PhDs, Ira went on to
get another PhD in Computer Science.

Ira now lives in the San Jose area. I'm sure that he could write a MUCH
better article about this.

On a side note, there is the "clean" math that people learn on their way to
a math PhD, and then there is the "dirty" math that governs physical
systems. Dirty math is fraught with all sorts of multi-valued functions,
fundamental uncertainties, etc. To work in the world of "dirty" math, you
must escape the notation and figure out what the equation is all about, and
find some way of representing THAT, which may well not involve simple
numbers on the real-number line, or even on the complex number plane.

With this as background, as I see it, hypercomputation is just another
attempt to evade dealing with some hard mathematical problems. My recent
postings about changing representations to make unsupervised learning work
orders of magnitude faster and better are just one illustration of the sorts
of new approaches that are probably needed to "break through" present
barriers. Hypercomputation is worse than a cop-out, because it distracts
people from tackling the hard problems.

In short, I see the present conception of hypercomputation mostly as a tool
for the mathematically challenged to show that it isn't THEIR fault that
they haven't solved the hard problems, when yes, it IS their fault.

All that having been said, sometimes it IS worthwhile stating things in
terms that aren't directly computable, e.g. complex differential equations.
Escaping the need for direct computability often leads to seeing things at a
higher level, which of course is where a numerical analysis person steps in
once something has been stated in an obscure form, to somehow coerce the
thing into a computable form. Just because something can be shown to be
"hard" or "not generally solvable" is no reason to give up until you can
somehow prove that it is absolutely impossible. Indeed, sometimes on the way
to such proofs, the "chinks" are found to solve them. Even where it IS
impossible, an adequate approximation can usually be found.

In summary, there ARE problems that could be classified as needing
hypercomputation, but hypercomputation itself is not as stated in your
paper. Everything that can be computed can be computed by conventional
computers of sufficient capability, as nothing in physics has yet been shown
to be uncomputable.

Steve Richfield
===
On 12/29/08, Ben Goertzel  wrote:

>
> Hi,
>
> I expanded a previous blog entry of mine on hypercomputation and AGI into a
> conference paper on the topic ... here is a rough draft, on which I'd
> appreciate commentary from anyone who's knowledgeable on the subject:
>
> http://goertzel.org/papers/CognitiveInformaticsHypercomputationPaper.pdf
>
> This is a theoretical rather than practical paper, although it does attempt
> to explore some of the practical implications as well -- e.g., in the
> hypothesis that intelligence does require hypercomputation, how might one go
> about creating AGI?   I come to a somewhat surprising conclusion, which is
> that -- even if intelligence fundamentally requires hypercomputation -- it
> could still be possible to create an AI via making Turing computer programs
> ... it just wouldn't be possible to do this in a manner guided entirely by
> science; one would need to use some other sort of guidance too, such as
> chance, imitation or intuition...
>
> -- Ben G
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> b...@goertzel.org
>
> "I intend to live forever, or die trying."
> -- Groucho Marx
>


