Hi,
> > I think that the hard problem of AGI is actually the other part:
> >
> > BUILDING A SYSTEM CAPABLE OF SUPPORTING SPECIALIZED INTELLIGENCES THAT
> > COMBINE NARROW-AI HEURISTICS WITH SMALL-N GENERAL INTELLIGENCE
>
> Yes this is part of the problem, the other thing you don't mention
> is
Pei Wang wrote:
Again, I'm not saying that NARS is not a TM in any sense, but that it is not
a TM at the question-answering level. As I said in the paper, if you
consider the life-long history of input and output of the whole system, NARS
is a TM. Also, if you check each individual inference st
Ben,
We seem to be thinking along similar lines in most aspects here...
The way the human brain seems to work is:
* some of its architecture is oriented towards increasing the sum over n of
int(S,n,rS,rT), where rS is given by brain capacity (enhanced by tools like
writing?) and rT is differe
Shane wrote:
> Two systems are fundamentally equivalent if it's possible for them to
> simulate each other given any finite amount of resources. They are
> fundamentally different if this is impossible no matter how much
> resource is made available. Clearly this is a very deep and
> fundamenta
Hi Ben,
If two computational models can solve radically different problems *under
realistic space and time constraints*, then are they "fundamentally
different" or not??
You seem to want to call two models "fundamentally the same" if they can solve
the same problems under infinite time and space c
Risto Miikkulainen [!! love that name !!] did some fun stuff having a
GA-evolved neural net learn to play Othello. It learned the rules of the
game, not just strategy (actually it learned both simultaneously)...
But to use ID3 or GA/NN for this kind of problem, one uses a different tree
or
This has been tried for chess with very limited success.
Quinlan, a PhD student of Donald Michie in England,
now I believe a professor in Sydney, Australia,
developed the ID3 algorithm and tested it on learning chess.
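For readers who haven't met it, ID3 builds a decision tree by greedily splitting on the attribute with the highest information gain. A minimal sketch in Python (the toy attributes and labels below are invented for illustration, not Quinlan's actual chess data):

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a multiset of class labels
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def id3(rows, labels, attrs):
    # rows: list of dicts attr -> value; returns a nested dict tree or a label
    if len(set(labels)) == 1:
        return labels[0]                          # pure node
    if not attrs:
        return Counter(labels).most_common(1)[0][0]  # majority vote
    def gain(a):
        g = entropy(labels)
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            g -= len(sub) / len(labels) * entropy(sub)
        return g
    best = max(attrs, key=gain)                   # greedy: highest info gain
    tree = {}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        tree[v] = id3([rows[i] for i in idx], [labels[i] for i in idx],
                      [a for a in attrs if a != best])
    return {best: tree}

# Hypothetical position features, nothing to do with real chess endgames
rows = [{"check": "y", "mobile": "y"}, {"check": "y", "mobile": "n"},
        {"check": "n", "mobile": "y"}, {"check": "n", "mobile": "n"}]
labels = ["win", "draw", "draw", "draw"]
print(id3(rows, labels, ["check", "mobile"]))
```

The greedy gain criterion is the whole trick: each split is locally optimal, with no global guarantee, which is one reason the chess experiments fared poorly.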
On Sat, 11 Jan 2003 22:34:01 -0500 "Ben
Well, computational complexity theory and formal computer science DO have
practical applications, of course. Loads of applications in compiler
design and in OS design too (in the theory underlying scheduling, load
balancing, etc.)
Algorithmic information theory doesn't have practical applicat
On Sun, Jan 12, 2003 at 03:25:08PM -0500, Ben Goertzel wrote:
> It is clear that traditional formal computing theory, though in principle
> *applicable* to AGI programs, is pretty much useless to the AGI theorist...
Is this anything unique to AGI? Does computing theory have much relevance for
Li
> And just to be clear, it's obviously a Turing machine since it
> runs on Turing
> machines and you can run any Turing machine on it. It is just very
> different from normal conceptions of universal computers. But
> then, I could
> say the exact same thing about the human brain.
>
> Cheers,
>
>
On 1/12/03 9:43 AM, "Damien Sullivan" <[EMAIL PROTECTED]> wrote:
>
> I'll take the risk of replying to other messages here without reading the 2
> dozen other replies first. It sounds like James is using primitive recursive
> functions, equivalent to Hofstadter's BlooP in _GEB_. Those are a subs
On Sun, Jan 12, 2003 at 11:05:15AM -0500, Pei Wang wrote:
> another related topic: the final state. In my paper I said that my system is
> not a TM, also because it doesn't have a set of predetermined final states
A system using S and K combinators isn't a TM at all; totally different
mechanism o
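For concreteness: S and K rewrite applicative terms rather than driving a head over a tape, via the rules K a b -> a and S a b c -> (a c) (b c). A toy reducer (a sketch of combinator reduction in general, not of any particular system discussed in this thread):

```python
# Terms: the strings 'S' and 'K', or a pair (f, x) meaning "apply f to x".
def step(t):
    """One leftmost-outermost reduction step; returns (term, changed)."""
    if isinstance(t, str):
        return t, False
    f, x = t
    if isinstance(f, tuple) and f[0] == 'K':
        return f[1], True                          # K a b -> a
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == 'S':
        a, b, c = f[0][1], f[1], x                 # S a b c -> (a c) (b c)
        return ((a, c), (b, c)), True
    f2, changed = step(f)                          # no redex here: recurse
    if changed:
        return (f2, x), True
    x2, changed = step(x)
    return (f, x2), changed

def normalize(t, limit=1000):
    """Reduce until no redex remains (or give up: some terms never halt)."""
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            break
    return t

I = (('S', 'K'), 'K')        # S K K behaves as the identity: I x -> x
print(normalize((I, 'K')))   # -> 'K'
```

Note there is still a halting question hiding in `limit`: terms like S I I (S I I) reduce forever, which is why the rewriting model is computationally equivalent to a TM even though the mechanism looks nothing like one.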
On Sun, Jan 12, 2003 at 10:37:13AM -0500, Pei Wang wrote:
> See my replies to Ben. As soon as the final answer (not intermediate
> answer) depends on internal state, we are not talking about the same Turing
> Machine anymore. Of course you can build a theory in this way, but it is
> already not
On Sun, Jan 12, 2003 at 09:38:26AM -0500, Ben Goertzel wrote:
> To me, the question of what a computational model can do with moderately
> small space and time resource constraints is at least equally "fundamental"
Computability theory: can this be computed?
Complexity theory: does it take po
> > I'm sorry but I still don't understand exactly what you mean by "Is
> computer
> > program-instance X a TM with respect to problem P"
>
> Each TM (or algorithm) is defined with respect to a "problem",
> which is a set
> of valid input strings. Each string in the set is a problem instance, ans
>
NARS does have a finite set of possible final states, though, because it's
implemented on a finite-state machine...
And, given a NARS codebase and its state in RAM & on disk at a given point
in time, you CAN predict its behavior from its inputs alone...
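Ben's determinism point in miniature: once the full state is captured in a snapshot, the output trace is a pure function of (snapshot, input stream). A toy illustration (the update rule is arbitrary, chosen only to be fixed and deterministic):

```python
def run(snapshot, inputs):
    """Output trace depends only on the starting snapshot and the inputs."""
    state, trace = snapshot, []
    for ch in inputs:
        state = (state * 31 + ord(ch)) % 997   # any fixed update rule works
        trace.append(state)
    return trace

# Same snapshot + same inputs => identical behavior, every time.
print(run(42, "abc") == run(42, "abc"))   # -> True
```

The same would hold for a NARS process image: replaying the RAM/disk snapshot against the same input log must reproduce the run exactly, since the hardware is a finite deterministic machine.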
ben
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Sunday, January 12, 2003 10:50 AM
Subject: RE: [agi] AI and computation (was: The Next Wave)
> > Once again, the interesting question is not "Is NARS a TM?", but
> > "Is NARS a
> > TM with respect
James,
I basically agree with your points.
In my replies to Shane and Ben, I argued that the TM definition is too
limited by assuming a unique initial state. Now your posting addressed
another related topic: the final state. In my paper I said that my system is
not a TM, also because it doesn't
> Once again, the interesting question is not "Is NARS a TM?", but
> "Is NARS a
> TM with respect to problem P?" If the problem is "To answer
> Ben's email on
> `AI and computation'", then the system is not a TM (though it may
> be a TM in
> many other senses). For this reason, to discuss the comp
- Original Message -
From: "Shane Legg" <[EMAIL PROTECTED]>
> I think my reply to this is similar in nature to Ben's (however I think
> Ben's example of a push down automaton is a bit misleading, as it's
> really the ability to carry state between system cycles, not during a
> cycle of that
Cosmodelia wrote:
*
b) more
importantly, I think that making the program musically interact with humans
would go a long way toward helping the system *emotionally* understand humans ...
which is important for the system's long-term Friendliness toward humans
IMHO
this is the
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
> So let's look at NARS over the time-interval [s,t] corresponding to the
> answering of an individual question...
>
> Over this time-interval
>
> "The NARS program plus its internal state at the time point s"
>
> is still mode
Pei Wang wrote:
> Again, I'm not saying that NARS is not a TM in any sense, but
> that it is not
> a TM at the question-answering level. As I said in the paper, if you
> consider the life-long history of input and output of the whole
> system, NARS
> is a TM. Also, if you check each individual
Well,
there is no doubt that hardwiring can have a significant effect on a system's
propensity to take various kinds of actions.
However, the human urge to reproduce is a case in point of the ability of
a complex self-organizing AI system to act against its innate
propensities.
We
Music and AI's Friendliness
Hi Ben:
The project you describe -- AGI applied to music -- is one very dear to my
heart...
It is great to know that a person working on one of the few AGI projects loves
music, and I guess art as well.
It is something I miss in the transhumanist community at large.
It is
Shane wrote (responding to James)
> > We implement a virtual machine on top of a standard
> > computer architecture that is designed around a fundamentally different
> > model of a universal computer.
>
> I doubt that your model is really all that fundamentally different.
> Either your model isn't
James Rogers wrote:
> Obviously this is something of an oversimplification; I think I see your
> point. There is still a halting problem, just not a practical halting
> problem for most intents and purposes. Or at least it has been
> pushed to a
> level of abstraction where we don't really wor
Phil Sutton wrote:
***
I think if an ethical goal is general and
highly important then we should make sure we find ways to hardwire it - i.e., we
shouldn't launch AGIs into the world until we have worked out how to hardwire
the really critical ethical goals/constraints.
...
I think i
Shane Legg wrote:
> Why consider just one test problem?
>
> Doing so you will always be in danger of having a system that
> isn't truly general and is built for one specific kind of a task.
>
> A better idea, I think, would be to test the system on *all* problems
> that can be described in n bits
Phillip Sutton said:
>I think it is possible to lodge very abstract concepts into an
>entity, and use hard wiring to assist the AGI to rapidly and easily
>recognise examples of the 'hard' abstract concepts -
>thus giving some life to each abstract concept.
Another class of test for th
I've just read Ben's article
Thoughts on AI Morality, May, 2002
www.goertzel.org/dynapsyc/2002/AIMorality.htm
First a summary and then my comments.
Ben argues that it is desirable for AGIs to
have a set of ethics that, among
other things, motivates their compassionate treatment of life, including
h
Shane Legg wrote:
>A better idea, I think, would be to test the system on *all* problems
>that can be described in n bits or less (or use a large random sample
>from this space). Then your system is guaranteed to be completely
>general in a computational sense.
Sounds good to me. Perhaps my mot
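Shane's proposal amounts to enumerating (or randomly sampling) the space of bit strings of length n or less. A sketch of what that test space looks like:

```python
from itertools import product

def problems_up_to(n):
    """Every bit string of length <= n: the space of problems
    'that can be described in n bits or less'."""
    for length in range(n + 1):
        for bits in product('01', repeat=length):
            yield ''.join(bits)

space = list(problems_up_to(3))
print(len(space))   # 2^0 + 2^1 + 2^2 + 2^3 = 15
```

The space doubles with each extra bit, which is why Shane's parenthetical about using a large random sample rather than exhaustive testing matters for any realistic n.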
Hi James,
Interesting, as I make a similar argument. I'm using an unconventional
model of universal computation on finite state machinery that, among other
And, from your other email:
There is still a halting problem, just not a practical halting
problem for most intents and purposes.
As t