On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield steve.richfi...@gmail.com wrote:
Rob,
I just LOVE opaque postings, because they identify people who see things
differently than I do. I'm not sure what you are saying here, so I'll make
some random responses to exhibit my ignorance and elicit...
Sorry, the link I included was invalid, this is what I meant:
http://www.geog.ucsb.edu/~raubal/Publications/RefConferences/ICSC_2009_AdamsRaubal_Camera-FINAL.pdf
On Tue, Jun 29, 2010 at 2:28 AM, rob levy r.p.l...@gmail.com wrote:
> On Mon, Jun 28, 2010 at 5:23 PM, Steve Richfield
If anyone has any knowledge of or references to the state of the art in
explanation-based reasoning, can you send me keywords or links? I've read
some material via Google, but I'm not really satisfied with anything I've found.
Thanks,
Dave
On Sun, Jun 27, 2010 at 1:31 AM, David Jones
Jim,
The importance of the point here is NOT primarily about AGI systems having to
make this distinction. Yes, a real AGI robot will probably have to make this
distinction as an infant does - but in terms of practicality, that's an awfully
long way off.
The importance is this: real AGI is...
David Jones wrote:
> If anyone has any knowledge of or references to the state of the art in
> explanation-based reasoning, can you send me keywords or links?
The simplest explanation of the past is the best predictor of the future.
http://en.wikipedia.org/wiki/Occam's_razor
Thanks Matt,
Right. But Occam's Razor is not complete. It says simpler is better, but 1)
this only applies when two hypotheses have the same explanatory power and 2)
what defines simpler?
So, maybe what I want to know from the state of the art in research is:
1) how precisely do other people...
> Right. But Occam's Razor is not complete. It says simpler is better, but 1)
> this only applies when two hypotheses have the same explanatory power and 2)
> what defines simpler?
A hypothesis is a program that outputs the observed data. It explains the
data if its output matches what is observed.
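In code, this program-as-hypothesis view can be sketched like so (the candidate programs and observations are invented for illustration): keep only programs that reproduce the data, then prefer the shortest one.

```python
# Sketch: hypotheses are (source_text, function) pairs; a hypothesis
# "explains" the data if its output matches the observations, and
# Occam's razor picks the shortest consistent program.
observed = [0, 0, 0, 0, 0]

candidates = [
    ("lambda n: 0", lambda n: 0),
    ("lambda n: 0 if n < 5 else 1", lambda n: 0 if n < 5 else 1),
]

# Keep only hypotheses that reproduce the observed data.
consistent = [(src, f) for src, f in candidates
              if [f(n) for n in range(len(observed))] == observed]

# Minimum description length: shortest source wins.
best_src, best_f = min(consistent, key=lambda pair: len(pair[0]))
print(best_src)   # the simpler of the two consistent programs
print(best_f(5))  # its prediction for the next trial: 0
```

Note that both candidates explain the first five trials equally well; only the length criterion separates them, which is exactly the situation the thread is arguing about.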
Just off the cuff here - isn't the same true for vision? You can't learn vision
from vision. Just as NLP has no connection with the real world and relies
totally on the human programmer's knowledge of that world, your visual program
actually relies totally on your visual vocabulary - not...
Mike,
THIS is the flawed reasoning that causes people to ignore vision as the
right way to create AGI. And I've finally come up with a great way to show
you how wrong this reasoning is.
I'll give you an extremely obvious argument that proves that vision requires
much less knowledge to interpret...
David Jones wrote:
I wish people understood this better.
For example, animals can be intelligent even though they lack language because
they can see. True, but an AGI with language skills is more useful than one
without.
And yes, I realize that language, vision, motor skills, hearing, and all...
The point I was trying to make is that an approach that tries to interpret
language just using language itself and without sufficient information or
the means to realistically acquire that information, *should* fail.
On the other hand, an approach that tries to interpret vision with minimal...
David Jones wrote:
I really don't think this is the right way to calculate simplicity.
I will give you an example, because examples are more convincing than proofs.
Suppose you perform a sequence of experiments whose outcome can either be 0 or
1. In the first 10 trials you observe 00...
Experiments in text compression show that text alone is sufficient for learning
to predict text.
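A toy illustration of that claim (the corpus and the model are deliberately minimal; real compressors use far richer context models): even character bigram statistics learned from text alone already predict text.

```python
from collections import Counter, defaultdict

# Learn bigram statistics from raw text, with no outside knowledge.
corpus = "abracadabra"

counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def predict_next(ch):
    """Most likely character to follow ch, per corpus statistics."""
    return counts[ch].most_common(1)[0][0]

print(predict_next("a"))  # 'b' ("ab" occurs twice, "ac"/"ad" once each)
```

Text-compression programs are essentially this idea scaled up: the better the model predicts the next symbol, the fewer bits it needs.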
I realize that for a machine to pass the Turing test, it needs a visual model
of the world. Otherwise it would have a hard time with questions like "what
word in this ernai1 did I spell wrong?"
The purpose of text is to convey something. It has to be interpreted. Who
cares about predicting the next word if you can't interpret a single bit of
it?
On Tue, Jun 29, 2010 at 3:43 PM, David Jones davidher...@gmail.com wrote:
People do not predict the next words of text. We anticipate them, but...
You're not getting where I'm coming from at all. I totally agree vision is far
prior to language (and I've covered your points many times). That's not the
point, which is that vision is nevertheless still vastly more complex than you
have any idea.
For one thing, vision depends on...
Such an example is nowhere near sufficient to accept the assertion that
program size is the right way to define the simplicity of a hypothesis.
Here is a counterexample. It requires a slightly more complex example,
because a string of all zeros doesn't leave any room for alternative hypotheses.
Here is the...
On Tue, Jun 29, 2010 at 3:33 PM, Mike Tintner tint...@blueyonder.co.uk wrote:
> You're not getting where I'm coming from at all. I totally agree vision
> is far prior to language (and I've covered your points many times).
> That's not the point, which is that vision is nevertheless still vastly...
Answering questions is the same problem as predicting the answers. If you can
compute p(A|Q) where Q is the question (and previous context of the
conversation) and A is the answer, then you can also choose an answer A from
the same distribution. If p() correctly models human communication, then...
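A sketch of that equivalence (the distribution here is hand-built and hypothetical; a real model would have to learn p(A|Q) from data): the same p(A|Q) that scores candidate answers can be sampled to generate one.

```python
import random

# Hypothetical conditional distribution p(answer | question).
p = {
    "how are you?": {"fine, thanks": 0.7, "not great": 0.3},
}

def answer(question, rng=random.Random(0)):
    """Generate an answer by sampling the same distribution used for prediction."""
    dist = p[question]
    answers, weights = zip(*dist.items())
    return rng.choices(answers, weights=weights)[0]

print(answer("how are you?"))
```

The design point: prediction and generation are the same object; choosing an answer is just drawing from the predictive distribution instead of evaluating it.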
The paper seems very similar in principle to LSA. What you need for a
concept vector (or position) is the application of LSA followed by K-Means,
which will give you your concept clusters.
I would not knock Hutter too much. After all, LSA reduces {primavera,
manantial, salsa, resorte} to one word...
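A sketch of the LSA-then-K-Means pipeline, assuming LSA has already mapped words to low-dimensional concept vectors (the 2-D coordinates, and which word lands in which cluster, are invented purely for illustration):

```python
import random

# Pretend these 2-D points are LSA outputs for the words in question.
vectors = {
    "primavera": (0.9, 0.1),
    "manantial": (0.8, 0.2),
    "salsa":     (0.1, 0.9),
    "resorte":   (0.2, 0.8),
}

def kmeans(points, k, iters=10, seed=0):
    """Plain k-means: assign points to nearest center, recompute centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return clusters

clusters = kmeans(list(vectors.values()), k=2)
print(sorted(sorted(cl) for cl in clusters))
```

On these toy coordinates the two nearby pairs separate into two clusters, which is the "concept cluster" step the message describes.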
Scratch my statement about it being useless :) It's useful, but nowhere
near sufficient for AGI-like understanding.
On Tue, Jun 29, 2010 at 4:58 PM, David Jones davidher...@gmail.com wrote:
Notice how you said *context* of the conversation. The context is the real
world, and is completely...
You can always find languages that favor either hypothesis. Suppose that you
want to predict the sequence 10, 21, 32, ? and we write our hypothesis as a
function that takes the trial number (0, 1, 2, 3...) and returns the outcome.
The sequence 10, 21, 32, 43, 54... would be coded:
int...
David,
What Matt is trying to explain is all right, but I think a better way of
answering your question would be to invoke the mighty mysterious Bayes' Law.
I had an epiphany similar to yours (the one that started this thread) about
5 years ago now. At the time I did not know that it had all...
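For what it's worth, a toy sketch of Bayes' law applied to hypothesis choice (the priors and likelihoods below are invented for illustration): when two hypotheses explain the data equally well, the prior - e.g. a simplicity bias - is what decides.

```python
# p(H|D) is proportional to p(D|H) * p(H); give the simpler hypothesis
# a higher prior (an Occam-style bias). Numbers are illustrative only.
hypotheses = {
    "always 0":      {"prior": 0.6, "likelihood": 1.0},  # fits the data so far
    "0 then switch": {"prior": 0.4, "likelihood": 1.0},  # also fits so far
}

evidence = sum(h["prior"] * h["likelihood"] for h in hypotheses.values())
posterior = {name: h["prior"] * h["likelihood"] / evidence
             for name, h in hypotheses.items()}
print(posterior)  # equal likelihoods, so the prior alone separates them
```

This mirrors David's point 1 above: Occam's razor (here, the prior) only does work when the competing hypotheses have the same explanatory power (here, equal likelihoods).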