(sometimes called heuristics), and it would still be
very useful to translate Mizar to CIC (perhaps the AGI could do the
translation...) but to have a being embodied at once in the physical
world and in the CIC world, wow! That would certainly prove something
;-)
Best regards,
Lukasz Stafiniak
On 4/22/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Well Matt, there's not only one hard problem!
NL understanding is hard, but theorem-proving is hard too, and
narrow-AI approaches have not succeeded at proving nontrivial theorems
except in very constrained domains...
Verification of
On 4/22/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Well Matt, there's not only one hard problem!
NL understanding is hard, but theorem-proving is hard too, and
narrow-AI approaches have not succeeded at proving nontrivial theorems
except in very constrained domains...
I happen to think
to dance = to create analogy across modalities = the key to intelligence
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415user_secret=fabd7936
On 4/23/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Ontic looks like an interesting and elegant formalism, but I don't see how it
would help an AGI learn mathematics. We are not yet at the point where we can
solve word problems like if I pay for a $4.95 item with a $10 bill, how much
change
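Once such a sentence is parsed into a formal query, the residual arithmetic is trivial; the hard part is the parse. A minimal sketch of the computation the word problem bottoms out in (using Python's Decimal so the cents stay exact):

```python
from decimal import Decimal

def change(paid: str, price: str) -> Decimal:
    # Exact decimal arithmetic: no binary floating-point rounding surprises.
    return Decimal(paid) - Decimal(price)

print(change("10.00", "4.95"))  # 5.05
```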
On 4/23/07, John G. Rose [EMAIL PROTECTED] wrote:
Hi,
Adding some thoughts on AGI math - If the AGI or a sub processor of the AGI
is allotted time to sleep or idle process it could lazily postulate and
construct theorems with spare CPU cycles (cores are cheap nowadays), put
things together and
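The idle-cycle idea can be caricatured in a few lines: let spare cycles try to falsify candidate identities on random instances, and keep whatever survives as a conjecture. Everything here (the candidate table, the trial count) is invented for illustration, not any actual AGI design:

```python
import random

# Hypothetical "lazy postulation": spare CPU cycles try to falsify
# candidate theorems; survivors of many random trials become conjectures.
CANDIDATES = {
    "a+b == b+a": lambda a, b: a + b == b + a,
    "a-b == b-a": lambda a, b: a - b == b - a,
    "a*(b+1) == a*b + a": lambda a, b: a * (b + 1) == a * b + a,
}

def idle_conjecture(trials=200, seed=42):
    rng = random.Random(seed)
    surviving = set(CANDIDATES)
    for _ in range(trials):
        a, b = rng.randint(-50, 50), rng.randint(-50, 50)
        surviving = {n for n in surviving if CANDIDATES[n](a, b)}
    # "a-b == b-a" is falsified almost immediately; the true identities survive.
    return surviving
```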
On 4/23/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
We really are pigs in space when it comes to discrete symbol manipulation such
as arithmetic or logic. It's actually harder (mentally) to do a
multiplication step such as 8*7=56 than to catch a Frisbee -- and I claim
I've learnt
On 4/28/07, Mike Tintner [EMAIL PROTECTED] wrote:
Disagree. The brain ALWAYS tries to make sense of language - convert it into
images and graphics. I see no area of language comprehension where this
doesn't apply.
I think that a *solution to NLP* is not a *solution to AGI*, so your
argument
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
I think that a *solution to NLP* is not a *solution to AGI*, so your
argument does not apply.
I think that this depends upon your definition of intelligence and also
assumes that a solution to NLP is not enough to bootstrap the rest. I could
On 4/28/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
I disagree with this in two ways. First, it's fairly well accepted among
mainstream AI researchers that full NL competence is AI-complete, i.e. that
human-level intelligence is a prerequisite for NL.
I don't think this is the operational
On 4/28/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On 4/28/07, J. Storrs Hall, PhD. [EMAIL PROTECTED] wrote:
I disagree with this in two ways. First, it's fairly well accepted among
I was writing in the context of Mark Waser's language-specific solutions
(as I understand them), which if wished could
On 4/28/07, Eugen Leitl [EMAIL PROTECTED] wrote:
On Sat, Apr 28, 2007 at 01:15:13PM -0400, J. Storrs Hall, PhD. wrote:
In case anyone is interested, some folks at IBM Almaden have run a
one-hemisphere mouse-brain simulation at the neuron level on a Blue Gene (in
What they did was run a
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
I don't think this is the operational sense of NLP as pursued by
applying linguistic theories in narrow AI setting. (e.g. Dynamic
Syntax, DRT, HPSG, ...)
but we want to apply NLP generally (i.e. not just in a narrow AI setting)
(For what
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
No, I mean applying it to another modality, so to say, to some other
kind of problem solving, not to another language
Ah. And this is the basis for my repeated clarification about NLP requiring
general cognition of a specific level (or type).
On 4/28/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
So you mean that NLP can/must understand the Whorfian, Barthesian,
philosophical broad language using the tools of computational
linguistics' narrow language?
Then NLP=AGI (holistic, non-modular view)
Not that structuralists, tracing back
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
You are right that NLP implies the processing of world-view; I would
just note that general world-view management should be outsourced to
the AGI core.
I agree. My mental separation is that the NLP module simply consists of
the parser and the
On 4/28/07, Mark Waser [EMAIL PROTECTED] wrote:
So, in the real context of an AGI, you make her responsible for
talking to you in this simplified language, which just pushes language
understanding under her carpet ;-)
(joking here)
ASSERT(Lukasz Stafiniak, Evil)
So . . . . if I get NLP
On 5/3/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Benjamin Goertzel [EMAIL PROTECTED] wrote:
The Speagram language framework allows programming in a natural language
like idiom
http://www.speagram.org/
IMO this is a fascinating and worthwhile experiment, but I'm not yet
convinced it
On 5/3/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Me neither. It gives you the means to define a grammar for English, but
conveniently leaves the hard part up to the user :-)
In the next two months or so, we will change your impression without
changing the truth value of this
On 5/3/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On 5/3/07, Matt Mahoney [EMAIL PROTECTED] wrote:
But how does Speagram resolve ambiguities like this one? ;-)
Generally, Speagram would live with both interpretations until one of
them fails or it gets a chance to ask the user.
(But more
specified by a concrete application of the system
(I would call this a symbolic approach).
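The "live with both interpretations" strategy described above can be sketched as follows (an illustrative Python toy, not Speagram's actual implementation):

```python
# Keep every surviving interpretation; prune as constraints arrive;
# ask the user only if ambiguity persists.

def prune(interpretations, constraint):
    """Drop interpretations that fail a newly arrived constraint."""
    return [i for i in interpretations if constraint(i)]

def resolve(interpretations, ask_user):
    if len(interpretations) == 1:
        return interpretations[0]
    return ask_user(interpretations)  # bother the user only as a last resort

# "I saw her duck": two readings carried side by side.
readings = [{"duck": "noun"}, {"duck": "verb"}]
# Later context ("It quacked.") makes the verb reading fail.
readings = prune(readings, lambda r: r["duck"] == "noun")
print(resolve(readings, ask_user=lambda rs: rs[0]))  # {'duck': 'noun'}
```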
On 5/8/07, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Iddo Lev has a more practical answer:
http://www.stanford.edu/~iddolev/pulc/current_work.html
Just looking
You are welcome. Indeed, I was tempted to keep it for myself ;-)
As for learning rules, I guess you know the work
http://citeseer.ist.psu.edu/605753.html or similar. In practical
contexts, it must be integrated with learning the semantic lexicon
(e.g., feature structures), and thus, the
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-273.pdf
http://citeseer.ist.psu.edu/302165.html
How far is it from hybrid agent architecture to integrative artificial
intelligence?
I don't necessarily mean artificial general intelligence.
Hybrid architecture seems to be pre-integrative: it
On 5/12/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Of course, agents may not need AGI to be autonomous -- it all depends on
what
they need to do, and how flexibly they need to react
What sorts of behaviors do you want your agents to be capable of?
In this thread, I don't go beyond an
On 5/8/07, Mark Waser [EMAIL PROTECTED] wrote:
You are welcome. Indeed, I was tempted to keep it for myself ;-)
Please. DON'T!:-)
OK, so here comes a very interesting recent course:
http://www.inf.ed.ac.uk/teaching/courses/spnlp/
(Other relevant topics I've found are Situation Theory,
On 5/13/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Saturday 12 May 2007 09:00:46 am Pei Wang wrote:
...My understanding is that ..., your world model is, in essence, a bunch
of if I
do this, I'll observe that, which is a summary of experience, or
interactions between the system and
Attention Mizar users:
As I think some people on this list are interested in Mizar,
http://www.ags.uni-sb.de/~cebrown/mizar-texmacs/mizar-texmacs-tutorial.html
Well, TeXmacs vs. Emacs is still an open problem for me. I am all for
WYSIWYM; it is essential to look at structural mathematical
On 5/22/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
more collective wisdom. But at the level of principles, TeXmacs is
[wisdom == knowledge]
On 5/22/07, Chuck Esterbrook [EMAIL PROTECTED] wrote:
Any opinions on Operator Grammar vs. Link Grammar?
http://en.wikipedia.org/wiki/Operator_Grammar
If you are interested in Operator Grammar, perhaps you would also want
to take a look at Grammatical Framework:
On 5/22/07, Matt Mahoney [EMAIL PROTECTED] wrote:
Link grammar has a website and online demo at
http://www.link.cs.cmu.edu/link/submit-sentence-4.html
But as I posted earlier, it gives the same parse for:
- I ate pizza with pepperoni.
- I ate pizza with a friend.
- I ate pizza with a fork.
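The three sentences differ in where the with-phrase attaches: "pepperoni" modifies "pizza", while "a friend" and "a fork" modify the eating. A toy sketch of lexical-preference disambiguation (the preference table is invented for illustration; real disambiguators learn such statistics from corpora):

```python
# Toy PP-attachment disambiguation for the three sentences above.
# PREFS is a hand-invented stand-in for corpus-derived statistics.
PREFS = {
    ("ate", "with", "pepperoni"): "noun",  # topping -> modifies "pizza"
    ("ate", "with", "friend"): "verb",     # companion -> modifies "ate"
    ("ate", "with", "fork"): "verb",       # instrument -> modifies "ate"
}

def attach(verb, head_noun, prep, pp_object):
    site = PREFS.get((verb, prep, pp_object), "verb")  # verbal attachment as default
    target = verb if site == "verb" else head_noun
    return f"[{prep} {pp_object}] attaches to '{target}'"

print(attach("ate", "pizza", "with", "pepperoni"))  # [with pepperoni] attaches to 'pizza'
```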
On 5/23/07, Mark Waser [EMAIL PROTECTED] wrote:
systems in that there has been success in processing huge amounts (corpuses,
corpi? :-) of data and producing results -- but it's *clearly* not the way
corpora
For those of you interested in type-driven program synthesis:
http://www.cs.washington.edu/homes/blerner/seminal.html
(quick link:
http://www.cs.washington.edu/homes/blerner/files/seminal-visitdays.ppt)
P. S. (I couldn't resist this one, sorry :-)
The textbooks on logic are like novels of Dostoyevsky: they develop
slowly and meticulously, but they culminate on high (with Goedel's
theorems).
On 5/31/07, James Ratcliff [EMAIL PROTECTED] wrote:
The actual algorithm in this case is less intelligent, but the AI is more
intelligent because it has 2 algorithms to use, and knows enough to choose
between them.
This is similar to the sorting problem... depending on how large a list
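A minimal sketch of that meta-choice: insertion sort for small inputs, mergesort otherwise. The threshold is arbitrary here; real hybrid sorts such as Timsort make the same kind of size-based switch, though not with this code:

```python
THRESHOLD = 32  # arbitrary cutoff, for illustration only

def insertion_sort(xs):
    xs = list(xs)
    for i in range(1, len(xs)):
        j, key = i, xs[i]
        while j > 0 and xs[j - 1] > key:
            xs[j] = xs[j - 1]
            j -= 1
        xs[j] = key
    return xs

def smart_sort(xs):
    # The "meta" decision: pick the algorithm suited to the input size.
    if len(xs) <= THRESHOLD:
        return insertion_sort(xs)          # low overhead on small inputs
    mid = len(xs) // 2
    left, right = smart_sort(xs[:mid]), smart_sort(xs[mid:])
    merged, i, j = [], 0, 0                # merge step of mergesort
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```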
On 6/1/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
I'm starting a wiki, any suggestion of which software to use? Any
particular feature you'd like to see in it?
My partner is using MediaWiki currently.
You can consider PmWiki, for which we have an Emacs mode.
On 5/17/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Wednesday 16 May 2007 04:47:53 pm Mike Tintner wrote:
Josh
. If you'd read the archives,
you'd see that I've advocated constructive solid geometry in Hilbert
spaces
as the basic representational primitive.
Would you like to
In the name of Church-Turing-von Neumann, don't follow that heresy.
Quantum computers are a kind of Hegelian synthesis of analog and digital.
There are quirks going on inside computers, like error correction on
memory retrieval, so as not to mess up your (or the computer user's)
symbols.
If you read
On 6/2/07, Mark Waser [EMAIL PROTECTED] wrote:
By some measures Google is more intelligent than any human. Should it
have
human rights? If not, then what measure should be used as criteria?
Google is not conscious. It does not need rights. Sufficiently complex
consciousness (or even
On 6/2/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Google has its rights. No crazy totalitarian government tells Google what to do.
(perhaps it should go: Google struggles for its rights, sometimes
making moral compromises)
On 6/2/07, Derek Zahn [EMAIL PROTECTED] wrote:
For a for-profit AGI project I suggest the following definition of
intelligence:
The ability to create information-based objects of economic value.
What about:
The ability to create information-based objects generating income.
This is less
On 6/2/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
http://www.singinst.org/research/summary
[see menu to the left] embodying some research I think will be extremely
valuable for pushing forward toward AGI, and that I think is well-
pursued in an open, nonprofit context.
* Research Area
(Perhaps anyone interested already knows this homepage, but anyway...
I've just found it.)
http://kti.ms.mff.cuni.cz/~urban/
One more bite:
Locus Solum: From the rules of logic to the logic of rules by
Jean-Yves Girard, 2000.
http://lambda-the-ultimate.org/node/1994
On 6/5/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
Speaking of logical approaches to AGI... :-)
http://www.thinkartlab.com/pkl/
On 6/2/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
And many scientists refer to potential energy surfaces and the like. There's a
core of enormous representational capability with quite a few well-developed
intellectual tools.
Another Grand Unification theory: Estimation of Distribution
On 6/6/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
I believe:
The practice of representing knowledge using high-dimensional
numerical vectors ;-)
I misspelled it from "vectorialism", see:
Churchland on connectionism
http://www-cse.ucsd.edu/users/gary/pubs/laakso-church-chap.pdf
(vectorialism
On 6/6/07, Derek Zahn [EMAIL PROTECTED] wrote:
D. There are no consortiums to join.
I see talk about joining Novamente, but are they hiring? It might be
possible to volunteer to work on peripheral things like AGISIM, but I sort
of doubt that Ben is eager to train volunteers on the AGI-type
On 6/7/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
Sure, but the nature of AGI is that wizzy demos are likely to come fairly
late in the development process. All of us actively working in the field
understand this
What about LIDA? Even if she is not very general she is more
cognitive
Which books would you recommend? For which there is a better
replacement? My results of quick amazon.com browsing:
Body Language: Representation in Action (Bradford Books) (Hardcover)
by Mark Rowlands (Author)
OK,
(1) Which book on pattern recognition is the most AGIsh? (Vapnik comes
in his own right)
((2) - (3) as before),
(4) When will Probabilistic Logic Networks be out?
It's a far better answer than I asked for :-)
On 6/6/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
Norm of our vectors, known of old--
Lord of our far-flung number line
Beneath whose measured length we hold
Dominion over quad and spline--
Memory trace, be with us yet,
Lest we forget -
On 6/8/07, Mark Waser [EMAIL PROTECTED] wrote:
You are never going to see a painting by committee that is a great
painting.
And he's right. This was Sterling's indictment of Wikipedia (and of the
wisdom of crowds fad sweeping the Web 2.0 pitch sessions of Silicon
Valley), but it's also a fair
On 6/8/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
This is basically right. There are plenty of innovative Open Source programs
out there, but they are typically some academic's thesis work. Being Open
Source can allow them to be turned into solid usable applications, but it
can't create
I've ended up with the following list. What do you think?
* Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity
and Its Applications, Springer Verlag 1997
* Marcus Hutter, Universal Artificial Intelligence: Sequential
Decisions Based On Algorithmic Probability, Springer Verlag
On 6/9/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
I've ended up with the following list. What do you think?
I would like to add Locus Solum by Girard to this list, and then it
seems to collapse into a black hole... Don't care?
* Ming Li and Paul Vitanyi, An Introduction to Kolmogorov
On 6/9/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
I'm not aware of any book on pattern recognition with a view on AGI, except
The Pattern Recognition Basis of Artificial Intelligence by Don Tveter
(1998):
http://www.dontveter.com/basisofai/basisofai.html
You may look at The Cambridge
On 6/6/07, Peter Voss [EMAIL PROTECTED] wrote:
'fraid not. Have to look after our investors' interests… (and, like Ben, I'm
not keen for AGI technology to be generally available)
But at least Novamente makes a convincing amount of their ideas
available IMHO.
P.S. Probabilistic Logic
On 6/12/07, Mark Waser [EMAIL PROTECTED] wrote:
a question is whether a software program could tractably learn language
without such associations, by relying solely on statistical associations
within texts.
Isn't there an alternative (or middle ground) of starting the software
program with a
On 6/12/07, Derek Zahn [EMAIL PROTECTED] wrote:
Some people, especially those espousing a modular software-engineering type
of approach seem to think that a perceptual system basically should spit out
a token for chair when it sees a chair, and then a reasoning system can
take over to reason
On 6/13/07, Matt Mahoney [EMAIL PROTECTED] wrote:
If yes, then how do you define pain in a machine?
A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other things being equal (that is, when there is no higher goal
On 6/13/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
On 6/13/07, Matt Mahoney [EMAIL PROTECTED] wrote:
If yes, then how do you define pain in a machine?
A pain in a machine is the state in the machine that a person
empathizing with the machine would avoid putting the machine into,
other
On 6/14/07, Matt Mahoney [EMAIL PROTECTED] wrote:
I would avoid deleting all the files on my hard disk, but it has nothing to do
with pain or empathy.
Let us separate the questions of pain and ethics. There are two independent
questions.
1. What mental or computational states correspond to
On 6/14/07, Matt Mahoney [EMAIL PROTECTED] wrote:
I don't believe this addresses the issue of machine pain. Ethics is a complex
function which evolves to increase the reproductive success of a society, for
example, by banning sexual practices that don't lead to reproduction. Ethics
also
Hello,
Have you worked on or thought about autonomous training? An AGI,
before engaging into a critical mission, has to prepare herself for
that, so she has to learn and simulate the domain of the mission.
On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:
Hi,
I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm , including an AGI
Overview followed by Representative AGI Projects.
Thanks! As a first note, SAIL seems to me a better replacement for
Cog, because SAIL has
Looking through Wikipedia articles I stumbled upon a probably very
interesting place:
http://www.auai.org/
Association for Uncertainty in Artificial Intelligence
Obligatory reading:
http://www.cs.ualberta.ca/~sutton/book/ebook/the-book.html
Cheers.
On 6/23/07, Bo Morgan [EMAIL PROTECTED] wrote:
Reinforcement learning is a simple theory that only solves problems for
which we can design value functions.
But it is good for AGI newbies like me to start with :-)
On 6/22/07, Pei Wang [EMAIL PROTECTED] wrote:
I put a brief introduction to AGI at
http://nars.wang.googlepages.com/AGI-Intro.htm , including an AGI
Overview followed by Representative AGI Projects.
I think that hybrid and integrated descriptions are useful,
especially when seeing AGI in the
On 6/23/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
I think that hybrid and integrated descriptions are useful,
especially when seeing AGI in the broader context of agent systems,
but they need to be further elaborated (I posted about
TouringMachines hoping to bring that up). For me, now
I'm starting to learn about Numenta's HTM, but perhaps someone would
like to share in advance:
what are the essential differences between HTM and Juyang Weng's IHDR
augmented with Observation-driven MDPs?
Ouch, they differ more than I thought... Good :-)
(HTM based more on Bayes nets)
On 6/24/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
I'm starting to learn about Numenta's HTM, but perhaps someone would
like to share in advance:
what are the essential differences between HTM and Juyang Weng's
On 6/24/07, Bob Mottram [EMAIL PROTECTED] wrote:
I have one of Richard Sutton's books, and RL methods are useful but I
also have some reservations about them. Often in this sort of
approach a strict behaviorist position is adopted where the system is
simply trying to find an appropriate
The obvious observation is that HTM is bottom-up and IHDR is top-down.
HTM builds hierarchy by merging fixed, topologically-organized,
coordinate-system-based subspaces (tilings), whereas IHDR builds
hierarchy by splitting the input space via adaptively learned Gaussian
features.
On 6/27/07, Vladimir Nesov [EMAIL PROTECTED] wrote:
I think AI books are not particularly helpful, not at first (if you know
enough about algorithms, programming and math, generally).
AI provides technical answers to well-formulated
questions, but with AGI, the right questions are what's lacking.
On 6/29/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
I've talked to John Weng many times before, and I found that his AGI has
some problems but he wasn't very eager to talk about them. For example, it
could only recognize pre-trained objects (eg, a certain doll) but not
general object
On 6/29/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
It seems intuitive that bottom-up approach is better at
generalization. HTM is much more sophisticated; conditional
probabilities, and learning in the context of sequences, must really
be helpful. (IHDR can have time-chunking
BTW, has HTM been seriously tried at medical images understanding?
Ute Schmid publications:
http://www.cogsys.wiai.uni-bamberg.de/schmid/publications.html
About this book
Because of its promise to support human programmers in developing
correct and efficient program code and in reasoning about programs,
automatic program synthesis has attracted the attention
Major premise and minor premise in a syllogism are not
interchangeable. Read the derivation of truth tables for abduction and
induction from the semantics of NAL to learn that different ordering
of premises results in different truth values. Thus while both
orderings are applicable, one will
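The order-dependence can be illustrated with induction's truth function, written here in the style of NAL's evidence weights. The formulas are paraphrased from memory; the parameter k and the exact definitions should be checked against Pei Wang's official NAL publications:

```python
K = 1.0  # evidential horizon (a NARS "personality" parameter; assumed value)

def induction(premise1, premise2):
    # Premises: M -> P <f1, c1> (major) and M -> S <f2, c2> (minor);
    # conclusion: S -> P. Evidence weights in the style of NAL.
    (f1, c1), (f2, c2) = premise1, premise2
    w_plus = f1 * f2 * c1 * c2   # positive evidence
    w = f2 * c1 * c2             # total evidence
    frequency = w_plus / w if w > 0 else 0.0
    confidence = w / (w + K)
    return (frequency, confidence)

a, b = (0.9, 0.9), (0.6, 0.5)
print(induction(a, b) != induction(b, a))  # True: premise order matters
```

Swapping the premises swaps which truth value supplies the frequency and which supplies the bulk of the evidence, so the two orderings yield different conclusions, exactly as the post says.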
Hi,
Has anyone done in-depth (i.e. experimental or theoretical) comparison
of accuracy-based LCSs (XCS) and Eric Baum's economy? Eric only
mentions superiority over ZCS. But XCS is closer to Eric's systems: the
fitness of rules is based on their prediction of reward (compare with
making bids). I
When looking at it through a crisp (non-fuzzy) glass, the relation is
a preorder, not a (partial) order. And priming is essential. For
example, in certain contexts, we think that an animal is a human
(anthropomorphism).
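A preorder is reflexive and transitive but need not be antisymmetric: two distinct concepts can each subsume the other. A toy sketch of the priming cycle just described (the edge set is invented for illustration):

```python
# "x is at least as general as y" edges, with a priming cycle between
# "animal" and "human" (anthropomorphism), plus some ordinary edges.
EDGES = {("animal", "human"), ("human", "animal"),
         ("animal", "dog"), ("human", "child")}

def leq(x, y, edges=EDGES):
    """x <= y under the reflexive-transitive closure of the edges."""
    seen, frontier = {x}, [x]
    while frontier:
        node = frontier.pop()
        if node == y:
            return True
        for a, b in edges:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return False

# Antisymmetry fails: each side subsumes the other, yet the concepts differ.
print(leq("animal", "human"), leq("human", "animal"))  # True True
```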
On 10/9/07, Mark Waser [EMAIL PROTECTED] wrote:
Ack! Let me rephrase. Despite the
What you describe is not a visualization, but the silent inner speech.
http://en.wikipedia.org/wiki/Lev_Vygotsky#Thinking_and_Speaking
On 10/12/07, a [EMAIL PROTECTED] wrote:
Benjamin Goertzel wrote:
Well, it's hard to put into words what I do in my head when I do
mathematics... it
For those interested in higher dimensions, I've just grabbed a link
from wikipedia:
* Christopher E. Heil, A basis theory primer, 1997.
http://www.math.gatech.edu/~heil/papers/bases.pdf
Well, a mathematician needs to _understand_ (as opposed to what I
would call a knowledge base - inference
On 10/12/07, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
some of us are much impressed by it. Anyone with even a surface grasp
of the basic concept on a math level will realize that there's no
difference between self-modifying and writing an outside copy of
yourself, but *either one*
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:
HOW VALUABLE IS SOLOMONOFF INDUCTION FOR REAL WORLD AGI?
I will use the opportunity to advertise my equation extraction of
the Marcus Hutter UAI book.
And there is a section at the end about Juergen Schmidhuber's ideas,
from the older
On Nov 9, 2007 5:26 AM, Edward W. Porter [EMAIL PROTECTED] wrote:
ED ## what is the value or advantage of conditional complexities
relative to conditional probabilities?
Kolmogorov complexity is universal. For probabilities, you need to
specify the probability space and initial distribution
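Kolmogorov complexity is uncomputable, but a common stand-in approximates K(x) by the length of a real compressor's output, and the conditional K(x|y) by C(y+x) - C(y), the idea behind compression-based similarity measures. A rough sketch of that approximation (not Hutter's formalism):

```python
import random, zlib

def C(data: bytes) -> int:
    """Compressed length: a crude, computable stand-in for K(x)."""
    return len(zlib.compress(data, 9))

def cond_C(x: bytes, y: bytes) -> int:
    """Rough approximation of K(x|y): the *extra* description x needs
    once y is known (cf. the normalized compression distance)."""
    return C(y + x) - C(y)

rng = random.Random(0)
x = bytes(rng.getrandbits(8) for _ in range(4096))
# Random bytes are incompressible on their own, but nearly free given
# themselves: K(x|x) ought to be tiny.
print(C(x) > 3000, cond_C(x, x) < C(x) // 4)  # True True
```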
On Nov 9, 2007 5:26 AM, Edward W. Porter [EMAIL PROTECTED] wrote:
So are the programs just used for computing Kolmogorov complexity or are
they also used for generating and matching patterns.
The programs do not compute K complexity, they (their length) _are_ (a
variant of) Kolmogorov
On Nov 10, 2007 4:47 PM, Tim Freeman [EMAIL PROTECTED] wrote:
From: Lukasz Stafiniak [EMAIL PROTECTED]
The programs are generally required to exactly match in AIXI (but not
in AIXItl I think).
I'm pretty sure AIXItl wants an exact match too. There isn't anything
there that lets
On Nov 10, 2007 11:42 PM, Edward W. Porter [EMAIL PROTECTED] wrote:
You say there is no magic in AIXI. Is it just make-believe ("Let X be the
best way to solve problems. Use X."), or does it say something of value to
those like me who want to see real AGIs built?
Some observations that come to
On Nov 12, 2007 10:34 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
I can easily imagine that next year's grand challenge, or the one
thereafter, will explicitly require ability to deal with cyclists,
motorcyclists, pedestrians, children and dogs. Exactly how they'd test
this, however, I don't
I think that there are two basic directions to better the Novamente
architecture:
the one Mark talks about
more integration of MOSES with PLN and RL theory
On 11/13/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Response to Mark Waser Mon 11/12/2007 2:42 PM post.
MARK Remember that the
which limited navigation.)
I think that it would be a more fleshed-out knowledge representation
(but without limiting the representation-building flexibility of
Novamente).
-Original Message-
From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 14, 2007 9:15 AM
it reminds me of that old joke about three kinds of mathematicians ;-)
On Nov 19, 2007 5:25 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On Nov 18, 2007 11:24 PM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
There are a lot of worthwhile points in your post, and a number of things
I don't
Sent: Monday, May 14, 2007 7:51 AM
Subject: Re: [agi] Tommy
On Saturday 12 May 2007 10:24:03 pm Lukasz Stafiniak wrote:
Do you have some interesting links about imitation? I've found these,
not all of them interesting, I'm just showing what I have:
Thanks -- some of those
Under this thread, I'd like to bring your attention to Serial
Experiments: Lain, an interesting pre-Matrix (1998) anime.
On Jan 15, 2008 10:49 PM, Jim Bromer [EMAIL PROTECTED] wrote:
At any rate, I should have a better idea if the idea will work or not by the
end of the year.
Lucky you, last time I proved P=NP it only lasted two days ;-)
Some resources for people caught by this off-topic thread:
- old year's
On Jan 29, 2008 12:35 AM, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Vladimir Nesov [EMAIL PROTECTED] wrote:
Exactly. That's why it can't hack provably correct programs.
Which is useless because you can't write provably correct programs that aren't
extremely simple. *All* nontrivial
On Feb 17, 2008 2:11 PM, Russell Wallace [EMAIL PROTECTED] wrote:
On Feb 17, 2008 9:56 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
I'm planning to collect commonsense knowledge into a large KB, in the form
of first-order logic, probably very close to CycL.
Before you embark on such a
On Feb 19, 2008 2:41 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
I think resolution theorem proving provides a way to answer yes/no queries
in a KB. I take it as a starting point, and try to think of ways to speed
it up and to expand its abilities (answering what/where/when/who/how
On Fri, Mar 28, 2008 at 9:29 PM, Robert Wensman
[EMAIL PROTECTED] wrote:
A few things come to my mind:
1. To what extent are learning and reasoning a subtopic of cognitive
architectures? Are learning and reasoning a plugin to a cognitive
architecture, or is in fact the whole cognitive