I recently found this paper to contain some thinking relevant to the
considerations in this thread.
http://lcsd05.cs.tamu.edu/papers/veldhuizen.pdf
- Jef
-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
Is there any research that can tell us what kind of structures are better
for machine learning? Or perhaps w.r.t. a certain type of data? Are there
learning structures that will somehow learn things faster?
There is plenty of knowledge about which learning algorithms are better for
which
My impression is that most machine learning theories assume a search space
of hypotheses as a given, so it is out of their scope to compare *between*
learning structures (eg, between logic and neural networks).
Algorithmic learning theory - I don't know much about it - may be useful
because it
Thanks for the input.
There's one perplexing theorem, in the paper about the algorithmic
complexity of programming, that the language doesn't matter that much, i.e.,
the algorithmic complexities of a program in different languages differ
only by a constant. I've heard something similar about the
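The theorem being alluded to is the invariance theorem of Kolmogorov complexity: for any two universal machines $U$ and $V$ there is a constant $c_{UV}$, depending only on the pair of machines (intuitively, the length of a translator from one to the other) and not on the string being described, such that for every string $x$:

```latex
K_U(x) \le K_V(x) + c_{UV}
```

So the choice of description language shifts complexities by at most an additive constant, independent of $x$.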
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
My impression is that most machine learning theories assume a search space
of hypotheses as a given, so it is out of their scope to compare *between*
learning structures (eg, between logic and neural networks).
Algorithmic learning
VLADIMIR NESOV IN HIS 11/07/07 10:54 PM POST SAID
VLADIMIR Hutter shows that the prior can be selected rather arbitrarily
without giving up too much
ED Yes. I was wondering why the Solomonoff Induction paper made such
a big stink about picking the prior (and then came up with a choice that
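For reference, the prior under discussion is Solomonoff's universal prior over strings, defined with respect to a universal prefix machine $U$ (the form below is the standard textbook one, e.g. as in Hutter's book, not taken from the posts above):

```latex
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

where the sum ranges over programs $p$ whose output begins with $x$. Switching to a different universal machine changes $M$ by at most a multiplicative constant, which is why the choice can be made "rather arbitrarily" without giving up much predictive performance.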
Hi all,
Just a reminder that we are soliciting papers on
Sociocultural, Ethical and Futurological Implications of Artificial General
Intelligence
to be presented at a workshop following the AGI-08 conference in Memphis
(US) in March.
http://www.agi-08.org/workshop/
The submission deadline is
From: Jef Allbright [mailto:[EMAIL PROTECTED]
I recently found this paper to contain some thinking relevant to the
considerations in this thread.
http://lcsd05.cs.tamu.edu/papers/veldhuizen.pdf
This is an excellent paper, not only on the subject of code reuse but also on
the techniques
Jef,
The paper cited below is more relevant to Kolmogorov complexity than to
Solomonoff induction. I had thought about the use of subroutines before I
wrote my questioning critique of Solomonoff Induction.
Nothing in it seems to deal with the fact that the descriptive length of
reality's
On 08/11/2007, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
Thanks for the input.
There's one perplexing theorem, in the paper about the algorithmic
complexity of programming, that the language doesn't matter that much, i.e.,
the algorithmic complexity of a program in different languages only
Jiri Jelinek [mailto:[EMAIL PROTECTED] wrote,
All,
I don't want to trigger a long AGI definition talk, but can
one or two of you briefly tell me what might be wrong with
the definition I mentioned in the initial post: General
intelligence is the ability to gain knowledge in one context
and
On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
I'm sorry I'm not going to be able to provide much illumination for
you at this time. Just the few sentences of yours quoted above, while
showing a level of comprehension equal to or better than the average on
this list, demonstrate epistemological
I think the point that BillK was getting at in posting the collection of
definitions is that there is no one definition. Intelligence, and general
intelligence in particular, is one of those things that is hard to define,
but we (probably) know it when we see it.
If you can't find a source for your
Derek,
Thank you.
I think the list should be a place where people can debate and criticize
ideas, but I think such poorly reasoned and insulting flames as Jef's are
not helpful, particularly if they are driving potentially valuable
contributors like you off the list.
Luckily such flames
Jef,
In your below flame you spent much more energy conveying contempt than
knowledge. Since I don't have time to respond to all of your attacks, let
us, for example, just look at the last two:
MY PRIOR POST ...affect the event's probability...
JEF'S PUT-DOWN 1: More coherently, you might
BillK,
thanks for the link.
All,
I don't want to trigger a long AGI definition talk, but can one or two
of you briefly tell me what might be wrong with the definition I
mentioned in the initial post:
General intelligence is the ability to gain knowledge in one context
and correctly apply it in
Edward,
For some reason, this list has become one of the most hostile and poisonous
discussion forums around. I admire your determined effort to hold substantive
conversations here, and hope you continue. Many of us have simply given up.
Cool!
-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
Sent: Thursday, November 08, 2007 12:56 PM
To: agi@v2.listbox.com
Subject: Re: [agi] How valuable is Solomonoff Induction for real world
AGI?
Yeah, we use Occam's razor heuristics in Novamente, and they are
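Occam's razor heuristics of this general kind can be sketched as an MDL-style score: among candidate hypotheses, prefer the one minimizing data misfit plus a penalty for description length. The sketch below is a generic illustration of that idea only; the names (`occam_score`, `description_length`, the `alpha` weight) are hypothetical and this is not Novamente's actual mechanism.

```python
# MDL-style Occam heuristic: lower score = better hypothesis.

def description_length(hypothesis: str) -> int:
    """Crude complexity proxy: length of the hypothesis's encoding."""
    return len(hypothesis)

def occam_score(hypothesis: str, misfit: float, alpha: float = 1.0) -> float:
    """Data misfit plus a weighted complexity penalty."""
    return misfit + alpha * description_length(hypothesis)

# Hypothetical candidates: (encoding, misfit on observed data)
candidates = [
    ("y=x", 3.0),                               # short rule, small error
    ("y=x plus a 100-entry noise table", 0.0),  # perfect fit, verbose
]
best, _ = min(candidates, key=lambda c: occam_score(*c))
print(best)  # the short hypothesis wins despite its residual error
```

The `alpha` weight trades off fit against simplicity; with a large enough penalty, the shorter hypothesis is preferred even though it fits the data less exactly.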
Hi Linas,
Aside from Novamente and CYC, who else has attempted to staple
NLP to a reasoning engine? ...
I see the issues have been discussed thoroughly already,
but I did not see anyone actually answering your question.
Many people indeed have tried to staple NLP to a reasoner,
and even though
Some thoughts on current trends and their societal implications:
If autonomous vehicles become commonplace, this obviously has economic
implications for all those industries which rely upon the
unreliability of human drivers, and also for those workers whose jobs
are oriented around driving.
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:
Jef,
In your below flame you spent much more energy conveying contempt than
knowledge.
I'll readily apologize again for the ineffectiveness of my
presentation, but I meant no contempt.
Since I don't have time to respond to all of your
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:
In my attempt to respond quickly I did not intend to attack him or
his paper
Edward -
I never thought you were attacking me.
I certainly did attack some of your statements, but I never attacked you.
It's not my paper, just one that I
On 11/8/07, Edward W. Porter [EMAIL PROTECTED] wrote:
HOW VALUABLE IS SOLOMONOFF INDUCTION FOR REAL WORLD AGI?
I will use the opportunity to advertise my equation extraction from
Marcus Hutter's UAI book.
And there is a section at the end about Juergen Schmidhuber's ideas,
from the older
Jef,
ED (I have switched to make it even easier to
quickly see each change in speaker)
Thank you for your two posts seeking to clear up the misunderstanding
between us. I don't mind disagreements if they seek to convey meaningful
content, not just negativity. Your post of
Lukasz Stafiniak wrote in part on Thu 11/8/2007 11:54 AM
LUKASZ ## I think the main point is: Bayesian reasoning is about
conditional distributions, and Solomonoff / Hutter's work is about
conditional complexities. (Although directly taking conditional Kolmogorov
complexity didn't work,
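For readers following along, the conditional Kolmogorov complexity referred to here is the length of the shortest program that produces $y$ when given $x$ as auxiliary input on a universal machine $U$:

```latex
K(y \mid x) = \min \{\, |p| : U(p, x) = y \,\}
```

It plays the role for complexities that the conditional distribution $P(y \mid x)$ plays for probabilities, which is the parallel being drawn above.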
On Nov 5, 2007 7:01 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:
On Nov 4, 2007 12:40 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
How do you propose to measure intelligence in a proof of concept?
Hmmm, let me check my schedule...
Ok, I'll figure this out on Thursday night (unless I get hit by a