On 6/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
I'll try to answer this and Mike Tintner's question at the same time. The
typical GOFAI engine over the past decades has had a layer structure
something like this:
Problem-specific assertions
Inference engine/database
Lisp
on top of the
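To make that stack concrete, here is a minimal sketch (in Python rather than Lisp, with a rule encoding that is my own illustration, not anything from the post above): problem-specific assertions and rules sitting on top of a tiny forward-chaining inference engine over a fact database.

# Layer 1: problem-specific assertions (the fact database)
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# Layer 1: problem-specific rules; capitalized symbols are variables.
# grandparent(X, Z) :- parent(X, Y), parent(Y, Z)
rules = [([("parent", "X", "Y"), ("parent", "Y", "Z")],
          ("grandparent", "X", "Z"))]

# Layer 2: a tiny forward-chaining inference engine
def unify(pattern, fact, env):
    """Match one pattern against one fact, extending the bindings in env."""
    if len(pattern) != len(fact) or pattern[0] != fact[0]:
        return None
    env = dict(env)
    for p, f in zip(pattern[1:], fact[1:]):
        if p.isupper():                       # variable: bind or check
            if env.setdefault(p, f) != f:
                return None
        elif p != f:                          # constant: must match exactly
            return None
    return env

def match_all(premises, facts, env):
    """Yield every variable binding that satisfies all premises."""
    if not premises:
        yield env
        return
    for fact in facts:
        extended = unify(premises[0], fact, env)
        if extended is not None:
            yield from match_all(premises[1:], facts, extended)

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new facts are derived."""
    facts = set(facts)
    while True:
        derived = {tuple(env.get(term, term) for term in conclusion)
                   for premises, conclusion in rules
                   for env in match_all(premises, facts, {})} - facts
        if not derived:
            return facts
        facts |= derived

print(forward_chain(facts, rules))   # adds ('grandparent', 'alice', 'carol')

Each pass of forward_chain matches every rule against the database and adds whatever new conclusions fall out, which is roughly the job of the middle layer in the stack.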
Yes, what language to use when expressing memes is definitely one of the key
points in the construction of such a system. I think such a language needs to
fulfill the following criteria:
- Enough expressive power
- Algorithms for consistency checks, entailment etc. (see the sketch after
this list)
- Robustness to random
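On the second criterion, the simplest (if exponential) baseline is model enumeration over a propositional fragment. The sketch below is only an illustration of what consistency and entailment checks mean; the tuple encoding of formulas is an assumption made for the example.

from itertools import product

def holds(formula, model):
    """Evaluate a formula: a variable name, or a tuple
    ('not', f), ('and', f, g), ('or', f, g), ('implies', f, g)."""
    if isinstance(formula, str):
        return model[formula]
    op, *args = formula
    if op == "not":
        return not holds(args[0], model)
    if op == "and":
        return holds(args[0], model) and holds(args[1], model)
    if op == "or":
        return holds(args[0], model) or holds(args[1], model)
    if op == "implies":
        return (not holds(args[0], model)) or holds(args[1], model)
    raise ValueError(op)

def variables(formula):
    """Collect the variable names occurring in a formula."""
    if isinstance(formula, str):
        return {formula}
    return set().union(*(variables(sub) for sub in formula[1:]))

def consistent(formulas):
    """True iff some truth assignment satisfies every formula."""
    names = sorted(set().union(*(variables(f) for f in formulas)))
    return any(all(holds(f, dict(zip(names, bits))) for f in formulas)
               for bits in product([False, True], repeat=len(names)))

def entails(premises, conclusion):
    """True iff the premises entail the conclusion (no counter-model)."""
    return not consistent(list(premises) + [("not", conclusion)])

print(consistent([("implies", "p", "q"), "p"]))    # True
print(entails([("implies", "p", "q"), "p"], "q"))  # True: modus ponens

Anything with real expressive power would of course need a proper theorem prover or SAT solver rather than this brute-force enumeration.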
Robert Wensman [EMAIL PROTECTED] wrote:
For an intelligence to know what actions are possible, it must model those
actions and think about them internally, and model that behavior; and I would
argue that all intelligence is about the behavior that the internal cognition
brings.
You can't really have an intelligence, I don't believe, without
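That point fits in a few lines of code: an agent that never tries an action in the world, but scores each candidate by running it through its own internal model first. The one-dimensional world, the action set, and the scoring rule below are all assumptions made for illustration.

def internal_model(state, action):
    """The agent's private model of how the world answers an action."""
    moves = {"left": -1, "stay": 0, "right": +1}
    return state + moves[action]

def imagined_value(state, action, goal):
    """Score an action by simulating it internally, not by acting."""
    return -abs(internal_model(state, action) - goal)

def choose_action(state, goal, actions=("left", "stay", "right")):
    """Pick the action whose simulated outcome looks best."""
    return max(actions, key=lambda a: imagined_value(state, a, goal))

print(choose_action(state=2, goal=5))  # 'right': the imagined outcome is best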
Well, if one of us becomes extremely successful biz-wise, but the other has
made some deep AI success, the one can always buy the other's company ;-)
Hey! If I become both extremely successful biz-wise *and* make some deep AI
success, can I give you the company and just make you pay me some
Certainly there are many ways to slay the beast. And the beast has many
definitions. For an open source AGI you'd have to not throw in the kitchen
sink, come up with a very basic design, and maybe not tout how the thing is
going to trigger a singularity? Maybe not try to replicate human brain
Robert Wensman writes:
Has there been any work done previously in statistical, example-driven
deduction?
Yes. In this AGI community, Pei Wang's NARS system is exactly that:
http://nars.wang.googlepages.com/
Also, Ben Goertzel (et al.) is building a system called Novamente
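For a flavor of what statistical, example-driven deduction looks like in NARS: each statement carries a (frequency, confidence) pair derived from evidence counts, and inference rules combine those pairs. The truth function below follows the deduction rule from Pei Wang's NARS publications (f = f1*f2, c = f1*f2*c1*c2); the evidence-counting helper and the example numbers are my own illustration.

def truth_from_evidence(positive, total, k=1):
    """NARS-style truth: frequency = w+/w, confidence = w/(w + k)."""
    return positive / total, total / (total + k)

def deduction(t1, t2):
    """Deduce S->P from S->M <f1, c1> and M->P <f2, c2>."""
    (f1, c1), (f2, c2) = t1, t2
    return f1 * f2, f1 * f2 * c1 * c2

# "Ravens are birds" supported by 9 of 10 examples; "birds fly" by 8 of 10.
raven_bird = truth_from_evidence(9, 10)
bird_fly = truth_from_evidence(8, 10)
print(deduction(raven_bird, bird_fly))  # "ravens fly", with lower confidence

Note how the conclusion's confidence is strictly lower than either premise's: evidence dilutes across each deductive step instead of being treated as certain.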
--- Lukasz Stafiniak [EMAIL PROTECTED] wrote:
http://www.goertzel.org/books/spirit/uni3.htm -- VIRTUAL ETHICS
The book chapter describes the need for ethics and cooperation in virtual
worlds, but does not address the question of whether machines can feel pain.
If you feel pain, you will insist
Eric,
Right. IMO roughly the same problem when processed by a
computer..
Why should you expect running a pain program on a computer to make you
feel pain any more than when I feel pain?
I don't. The thought was: If we don't feel pain when processing
software in our pain-enabled minds, why