On 11/17/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Learning logic is similar to learning grammar. A statistical model can
classify words into syntactic categories by context, e.g. "the X is" tells
you that X is a noun, and that it can be used in novel contexts where other
nouns have been observed …
YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:
> Any suggestions on how to make my project more popular?
Clearly state the problem you want to solve. Don't just build AGI for the sake
of building it.
> Do you think it is good practice to attach frames to *words*, or rather to
> *situations*?
On 11/16/06, Russell Wallace <[EMAIL PROTECTED]> wrote:
On 11/16/06, Hank Conn <[EMAIL PROTECTED]> wrote:
> How fast could RSI plausibly happen? Is RSI inevitable / how soon will
> it be? How do we truly maximize the benefit to humanity?
>
The concept is unfortunately based on a category error …
On 11/16/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
Correct,
Using inference only works in toy or small, well-understood domains: inevitably,
once it gets 2+ steps away from direct knowledge, it will be making large
assumptions and will be wrong.
My thoughts have been on an AISim as well, but …
On 11/16/06, Hank Conn <[EMAIL PROTECTED]> wrote:
How fast could RSI plausibly happen? Is RSI inevitable / how soon will it
be? How do we truly maximize the benefit to humanity?
The concept is unfortunately based on a category error: intelligence (in the
operational sense of ability to get th…
I think this is a topic for the singularity list, but I agree it could happen
very quickly. Right now there is more than enough computing power on the
Internet to support superhuman AGI. One possibility is that it could take the
form of a worm.
http://en.wikipedia.org/wiki/SQL_slammer_(comput
>>> I don't think the proofs depend on any special assumptions about
>>> the nature of learning.
>>
>> I beg to differ. IIRC the sense of "learning" they require is
>> induction over example sentences. They exclude the use of real-world
>> knowledge, in spite of the fact that such knowledge (…
> My point is that humans make decisions based on millions of facts, and we
> do this every second.
Not! Humans make decisions based upon a very small number of pieces of
knowledge (possibly compiled from large numbers of *very* redundant data).
Further, these facts are generally arranged somewhat …
Again, do not confuse the two compressions.
In paq8f (on which paq8hp5 is based) I use lossy pattern recognition (like you
describe, but at a lower level) to extract features to use as context for text
prediction. The lossless compression is used to evaluate the quality of the
prediction.
My point is that humans make decisions based on millions of facts, and we do
this every second. Every fact depends on other facts. The chain of reasoning
covers the entire knowledge base.
I said "millions", but we really don't know. This is an important number.
Historically we have tended to …
Here are some of my attempts at explaining RSI...
(1)
As a given instance of intelligence, defined as an algorithm of an agent
capable of achieving complex goals in complex environments, approaches the
theoretical limits of efficiency for this class of algorithms, intelligence
approaches infinity …
As Eric Baum noted, in his book "What Is Thought?" he did not in fact
define intelligence or understanding as compression, but rather made a
careful argument as to why he believes compression is an essential
aspect of intelligence and understanding. You really have not
addressed his argument in your …
I consider the last question in each of your examples to be unreasonable
(though for very different reasons).
In the first case, "What do you see?" is a nonsensical and unnecessary
extension on a rational chain of logic. The visual subsystem, which is not
part of the AGI, has reported something …
"Rings" and "Models" are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
IMO these analogies are not fair.
The mathematical notion of …
>> I don't believe it is true that better compression implies higher
>> intelligence (by these definitions) for every possible agent, environment,
>> universal Turing machine and pair of guessed programs.
Which I take to agree with my point.
>> I also don't believe Hutter's paper proved it to …
"Rings" and "Models" are appropriated terms, but the mathematicians
involved would never be so stupid as to confuse them with the real
things. Marcus Hutter and yourself are doing precisely that.
I rest my case.
Richard Loosemore
Please, let us avoid explicitly insulting one another, on this …
Matt Mahoney wrote:
Richard Loosemore <[EMAIL PROTECTED]> wrote:
5) I have looked at your paper and my feelings are exactly the same as
Mark's: theorems developed on erroneous assumptions are worthless.
Which assumptions are erroneous?
Marcus Hutter's work is about abstract idealizations …
Mark Waser <[EMAIL PROTECTED]> wrote:
> Give me a counter-example of knowledge that can't be isolated.
Q. Why did you turn left here?
A. Because I need gas.
Q. Why do you need gas?
A. Because the tank is almost empty.
Q. How do you know?
A. Because the needle is on "E".
Q. How do you know?
A. Because …
In the context of AIXI, intelligence is measured by an accumulated reward
signal, and compression is defined by the size of a program (with respect to
some fixed universal Turing machine) guessed by the agent that is consistent
with the observed interaction with the environment. I don't believe …
Sure they can; there has to be a finite amount of this information received and
passed. It may be large, but it is still finite.
So if a human moves his arm because you told him to, you can measure and look
at the arm and ask:
Why did his arm move?
The scientist could look down at the instruments a…
Data is single-point instances of occurrence;
information compiles data and some logic;
knowledge adds expectations and more logic.
The boundaries between them are *very* fuzzy, but the general hierarchy is
common consensus.
The main first subtitle:
"Compression is Equivalent to General Intelligence"
Unless your definition of "Compression" is not simply turning a large amount
of text into a small amount of text. And likewise with "General Intelligence".
I don't think under any of the many, many definitions I have seen …
Isn't this pointless? I mean, if I offer any proof you will just attack
the assumptions. Without assumptions, you can't even prove the universe
exists.
Just come up with decent assumptions that I'm willing to believe are likely.
I'm not attacking your assumptions just to be argumentative, I'…
> However, it has not yet been as convincingly disproven as the Cyc-type
> approach of feeding an AI commonsense knowledge encoded in a formal
> language ;-)
Actually, I would describe the Cyc-type approach as feeding an AI common-sense
data, which then begs all sorts of questions …
1. The fact that AIXI is intractable is not relevant to the proof that
compression = intelligence, any more than the fact that AIXI is not computable.
In fact it is supportive, because it says that both are hard problems, in
agreement with observation.
Wrong. Compression may (and, I might even …
What's your definition of the difference between data and knowledge, then?
Cyc uses a formal language based in logic to describe things.
James
Mark Waser <[EMAIL PROTECTED]> wrote:
> However, it has not yet been as convincingly disproven as the Cyc-type
> approach of feeding an AI commonsense knowledg…
I concur; there are just too many things wrong with these statements.
If your AI can't tell you on any level why it's doing something, and you can't
tell it not to do it, or to do it in a different way, then you have a Programmed
Machine, not an AI.
ALL programs are modified via changing the input to …
Mark Waser <[EMAIL PROTECTED]> wrote:
> So *prove* to me why information theory forbids transparency of a
> knowledge base.
Isn't this pointless? I mean, if I offer any proof you will just attack the
assumptions. Without assumptions, you can't even prove the universe exists.
I have already s…
Furthermore, we learned in class recently about a case where a person was
literally born with only half a brain; I don't have that story, but here is one:
http://abcnews.go.com/2020/Health/story?id=1951748&page=1
I think all the talk about hard numbers is really off base, unfortunately, and
AI shouldn't …
Eric Baum wrote:
Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:
> I don't know what you mean by incrementally updateable, but if
> you look up the literature on language learning, you will find
> that learning vari…
Correct,
Using inference only works in toy or small, well-understood domains: inevitably,
once it gets 2+ steps away from direct knowledge, it will be making large
assumptions and will be wrong.
My thoughts have been on an AISim as well, but I am laying out the works for it
to be massively avai…
Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:
>> I don't know what you mean by incrementally updateable, but if
>> you look up the literature on language learning, you will find
>> that learning various sorts …
> The knowledge base has high complexity. You can't debug it. You can examine
> it and edit it but you can't verify its correctness.
While the knowledge base is complex, I disagree with the way in which you're
attempting to use the first sentence. The knowledge base *isn't* so complex
that it …