I am pretty confident that the specialized indices we use (implemented
directly in C++) are significantly faster than comparable indices
implemented in an enterprise DB would be.
Wow. You've floored me, given that indexes are key to what enterprise DBs do
well. What are the special requirements
y
integrate. If the second number isn't a lot larger than the first, you're not
living in my world. :-)
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 6:02 PM
Subject: **SPAM** Re: [agi] Development Environments
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 5:23 PM
Subject: **SPAM** Re: **SPAM** Re: [agi] Development Environments for AI (a
few non-religious comments!)
On 2/20/07, Mark Waser <[EMAIL PROTECTED]> wrote:
I think that you grossly un
into the database itself and operate on it there.
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 3:31 PM
Subject: **SPAM** Re: [agi] Development Environments for AI (a few
non-religious comments!)
On 2/20/07, Mark Waser
My real point is that you don't really need a new dev env for this.
Richard is talking about some *substantial* architecture here -- not
just a development environment but a *lot* of core library routines (as you
later speculate) and functionality that is either currently spread across
many
Unfortunately, after being involved in .Net for quite some time, I do not
share your optimism. In fact, I came to think that .Net is not suitable
for anything that requires really high performance and parallelism.
Perhaps the problem is just that it is very very hard to build a really
good VM and proba
- Original Message -
From: "Samantha Atkins" <[EMAIL PROTECTED]>
To:
Sent: Sunday, February 18, 2007 10:22 PM
Subject: Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]
Mark Waser wrote:
And, from a practical programmatic way of having code generate code,
thos
One reason for picking a language more powerful than the run-of-the-mill
imperative ones (of which virtually all the ones mentioned so far are just
different flavors) is that they can give you access to different paradigms
that will enhance your view of how an AGI should work internally.
Very true
k at your subsequent email to Eliezer. Come on, man. Lighten up a little.
Everyone else ... I apologize for taking your time to read this email.
I'm just hoping it'll keep anyone from flaming people and calling them
stupid.
Enough said. I think we can all get along, and learn something
of day. Didn't you learn anything from the experience?
- Original Message -
From: "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]>
To:
Sent: Sunday, February 18, 2007 12:51 PM
Subject: **SPAM** Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]
e ...". Its a general comment to not reinvent wheels. If the
wheel doesn't fit perfectly, you can build an "adapter" for it.
Bottom line ... Pei is correct. There will not be a consensus on what
the most
suitable language is for AI.
Regards,
~Aki
On 18-Feb-07, at 11:3
What is the best language for AI begs the question --> For which aspect of
AI?
And also --> What are the requirements of *this particular part* of your AI,
and who is programming it?
Far and away, the best answer to the best language question is the .NET
framework. If you're using the framework
If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning? The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.
Sent: Tuesday, December 05, 2006 11:34 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser <[EMAIL PROTECTED]> wrote:
> Are
> you saying that the more excuses we can think up, the more intelligent
> we are? (Actually there might be something in that!).
S
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Tuesday, December 05, 2006 11:17 AM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
BillK <[EMAIL PROTECTED]> wrote:
On 12/4/06, Mark Waser wrote:
>
> Explaining our actions is the re
rsation is "Called "The Emotion Machine," it
argues that, contrary to popular conception, emotions aren't distinct from
rational thought; rather, they are simply another way of thinking, one that
computers could perform."
- Original Message -
From: "M
o be congruent with them (and even
more so in well-balanced and happy individuals).
- Original Message -
From: "BillK" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, December 05, 2006 7:03 AM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 12/4/06,
- Original Message -
From: "William Pearson" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 5:51 PM
Subject: [agi] Addiction was Re: Motivational Systems of an AI
On 04/12/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Why must you argue with everything I say? Is this not a s
To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.
The first sentence of the proposition was exact
list don't even agree on what it means, much less what its
implications are . . . .
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 2:03 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/
back to the original argument?
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 1:38 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
On 12/4/06, Mark Waser <[EM
You partition intelligence into
* explanatory, declarative reasoning
* reflexive pattern-matching (simplistic and statistical)
Whereas I think that most of what happens in cognition fits into
neither of these categories.
I think that most unconscious thinking is far more complex than
"reflexive
> Well, of course they can be explained by me -- but the acronym for
> that sort of explanation is "BS"
I take your point with important caveats (that you allude to). Yes, nearly all
decisions are made as reflexes or pattern-matchings on what is effectively
compiled knowledge; however, it is th
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 10:45 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Philip Goetz gave an example of an intrusion detection s
Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)
We're reaching the point of agreeing to disagree except . . . .
Are you really saying that nearly all of your decisions can't be explained
(
Ben,
I agree with the vast majority of what I believe you mean, but . . .
1) Just because a system is "based on logic" (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans. As I noted in recent posts,
probabilistic logic sy
Until we build AGI, we really
won't know. I realize I am repeating (summarizing) what I have said
before.
If you want to tear down my argument line by line, please do it privately
because I don't think the rest of the list will be interested.
--- Mark Waser <[EMAIL PROTECTED]> wrote:
a, and all sorts of other problems.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Sunday, December 03, 2006 10:19 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
--- Mark Waser <[EMAIL PROTECTED]>
ot do this.
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Sunday, December 03, 2006 9:17 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/2/06, Mark Waser <[EMAIL PROTECTED]> wrote:
A nice story but it proves absolutely
You cannot turn off hunger or pain. You cannot
control your emotions.
Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
at the mercy of your emotions?
Since the synaptic weights cannot be altered by
training (classical or operant conditioning)
Who says that synapt
t an AGI is going to have
to be able to explain/be explained.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Saturday, December 02, 2006 5:17 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
--- Mark Waser <[EMAIL PRO
He's arguing with the phrase "It is programmed only through evolution."
If I'm wrong and he is not, I certainly am.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Saturday, December 02, 2006 4:26 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI
Sent: Saturday, December 02, 2006 2:31 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/2/06, Mark Waser wrote:
My contention is that the pattern that it found was simply not translated
into terms you could understand and/or explained.
Further, and more importantly, the pattern matcher *doesn
On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Philip Goetz snidely responded
Some people would call it "repeating the same mistakes I already dealt
I'd be interested in knowing if anyone else on this list has had any
experience with policy-based governing . . . .
Questions like
Are the following things good?
- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.
can
Thank you for cross-posting this. Could you please give us more information
on your book?
I must also say that I appreciate the common-sense wisdom and repeated bon
mots that the "sky is falling" crowd seem to lack.
- Original Message -
From: "J. Storrs Hall, PhD." <[EMAIL PROTECTED
Subject: Re: [agi] A question on the symbol-system hypothesis
On 11/30/06, Mark Waser <[EMAIL PROTECTED]> wrote:
With many SVD systems, however, the representation is more
vector-like
and *not* conducive to easy translation to human terms. I have two
answers
to these cases. Answer
Well, it really depends on what you mean by "too complex for a human
to understand." Do you mean
-- too complex for a single human expert to understand within 1 week of
effort
-- too complex for a team of human experts to understand within 1 year of
effort
-- fundamentally too complex for human
't understand it :-).
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 9:36 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
ation unless you get really, really lucky in choosing your number of
nodes and your connections. Nature has clearly found a way around this
problem but we do not know this solution yet.)
Mark (going off to be plastered by replies to last night's message)
- Original Message
bol-system hypothesis
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> If you look into the literature of the past 20 years, you will easily
> find several thousand examples.
I'm sorry, but either you didn't understand my point or you don't know
what you are talking about
contending/assuming that I've overlooked several thousand examples is pretty
insulting).
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 4:17 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
m, but not what it has learned. If you could understand how it
arrived at a particular solution, then you have failed to create an AI
smarter than yourself.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wed
ns is that you can't see inside it, it only
seems like an invitation to disaster to me. So why is it a better design?
All that I see here is something akin to "I don't understand it so it must
be good".
- Original Message -
From: "Philip Goetz" <[EMA
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it does.
The problem is far worse than even James says. The 10^9 figure (at least,
the way Matt derives it) is just for the textual data that you read. That
data, however, probably does NOT have cognitive closure since your
understanding of it is *heavily* based upon your physical experiences.
-
ce it tries to make will
be wrong, regardless.
But that means that an architecture for AI will have to have a method for
finding these inconsistencies and correcting them with good efficiency.
James Ratcliff
Mark Waser <[EMAIL PROTECTED]> wrote:
>> I don't believ
has more,
and you try to explore the chain of reasoning, you will exhaust the memory
in your brain before you finish.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54
As Eric Baum noted, in his book "What Is Thought?" he did not in fact
define intelligence or understanding as compression, but rather made a
careful argument as to why he believes compression is an essential
aspect of intelligence and understanding. You really have not
addressed his argument in y
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser <[EMAIL PROTECTED]> wrote:
Give me a counter-example of knowledge that can't be isolated
ssible agent, environment, universal Turing machine and pair of guessed
programs. I also don't believe Hutter's paper proved it to be a general trend
(by some reasonable measure). But I wouldn't doubt it.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From:
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 1:02 PM
Subject: Re: [agi] One grammar parser URL
What's your definition of the difference between data and knowledge, then?
Cyc uses a formal language based in logic to describe the things.
James
Mark Waser <[EMAIL PROTECTED]> wrote:
> However, i
w why the statement is irrelevant, or
d) concede the point?
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Thursday, November 16, 2006 11:52 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser <[EMAIL PROTECTED]>
wrote:
> However, it has not yet been as convincingly disproven as the Cyc-type
> approach of feeding an AI commonsense knowledge encoded in a formal
> language ;-)
Actually, I would describe the Cyc-type approach as feeding an AI common-sense
data which then begs all sorts of questions . . . .
- Original Message -
ause we are discarding irrelevant data. If we
anthropomorphise the agent, then we say that we are replacing the input with
perceptually indistinguishable data, which is what we typically do when we
compress video or sound.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser &
t; if that
article is all that you've seen on the topic (though one would have hoped that
an integrity check or a reality check would have prompted further evaluation --
particularly since the article itself mentions that that would require an
unreasonably/impossibly large amount of RAM.)
The connection between intelligence and compression is not obvious.
The connection between intelligence and compression *is* obvious -- but
compression, particularly lossless compression, is clearly *NOT*
intelligence.
Intelligence compresses knowledge to ever simpler rules because that is a
to understand why a driver made a left
turn by examining the neural firing patterns in the driver's brain.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject:
rain it. Trying to debug the reasoning for its behavior would
be like trying to understand why a driver made a left turn by examining the
neural firing patterns in the driver's brain.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]
that it can't store more information than this.
It doesn't matter if you agree with the number 10^9 or not. Whatever the
number, either the AGI stores less information than the brain, in which case
it is not AGI, or it stores more, in which case you can't know everything it
does.
Mark Waser wrote:
Given sufficient time, anything should be able to be understood and
debugged.
Give me *one* counter-example to the above . . . .
Matt Mahoney replied:
Google. You cannot predict the results of a search. It does not help
that you have full access to the Internet
>> Models that are simple enough to debug are too simple to scale.
>> The contents of a knowledge base for AGI will be beyond our ability to comprehend.
Given sufficient time, anything should be able to be understood and
debugged. Size alone does not make something incomprehensible.
Even now, with a relatively primitive system like the current
Novamente, it is not pragmatically possible to understand why the
system does each thing it does.
Pragmatically possible obscures the point I was trying to make with
Matt. If you were to freeze-frame Novamente right after it took
So, how to get all this probabilistic commonsense knowledge (which in
humans is mostly unconscious) into the AGI system?
a-- embodied learning
b-- exhaustive education through NLP dialogue in very simple English
c-- exhaustive education through dialogue in some artificial language
like Lojban++
d-
>> Although I understand, in vague terms, what idea Richard is
attempting to express, I don't see why having "massive numbers of weak
constraints" or "large numbers of connections from [the] motivational
system to [the] thinking system." gives any more reason to believe it is
reliably Friendly
It is entirely possible to build an AI in such a way that the general
course of its behavior is as reliable as the behavior of an Ideal Gas: you
can't predict the position and momentum of all its particles, but you sure
can predict such overall characteristics as temperature, pressure, and
volume.
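A toy numerical sketch of that Ideal Gas point (an illustration constructed here, not taken from the thread): every particle below is random and individually unpredictable, yet the aggregate statistic barely moves between runs.

    import random

    def mean_kinetic_energy(n: int, seed: int) -> float:
        # Mean of (1/2)*v^2 over n particles with speeds drawn uniformly from [0, 1).
        rng = random.Random(seed)
        return sum(0.5 * rng.random() ** 2 for _ in range(n)) / n

    # Three runs with entirely different particles...
    for seed in (1, 2, 3):
        print(round(mean_kinetic_energy(1_000_000, seed), 4))
    # ...each prints ~0.1667 (the expected value is 1/6): the aggregate is
    # predictable even though no individual particle is.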
My position statement is: If in a sense a laptop computer is a Turing
machine, then in the same sense a robot is also a Turing machine.
I think that most people are missing the point here . . . . Analog can
always be converted to digital at a specified granularity, and simultaneous can
always be serialized
I just ran across the following
references in Neuro-Evolution (including evolving topologies in neural networks)
and figured that they might be interesting to others on this list:
http://nn.cs.utexas.edu/project-view.php?RECORD_KEY(Projects)=ProjID&ProjID(Projects)=14
http://www.
I would like to hear from others with this same point
of view, and otherwise from anyone who has a idea that
an open source AGI could be somehow made safe.
While I also don't believe that you can protect your open source AGI
from "what if [insert favorite bad guys] use it for nefarious purposes
I *don't* believe that the 1GB corpus is big enough to learn most
of this knowledge *USING STATISTICAL METHODS*. I *do* believe that it is
large enough for other methods though.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Monday, August 28, 2006 3:37 PM
S
- Original Message -
From: "Sampo Etelavuori" <[EMAIL PROTECTED]>
To:
Sent: Monday, August 28, 2006 8:56 AM
Subject: **SPAM** Re: [agi] Lossy *&* lossless compression
On 8/28/06, Mark Waser <[EMAIL PROTECTED]> wrote:
How does a lossless model observ
> However, I think that a lossless model can
reasonably derive this information by observing that p(x, x') is approximately
equal to p(x) or p(x'). In other words, knowing both x and x' does not
tell you any more than x or x' alone, or CDM(x, x') ~ 0.5. I think this is
a reasonable way to
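A minimal Python sketch of that CDM idea, using zlib as a stand-in compressor (an illustration, not from the thread; zlib captures only literal redundancy and carries per-stream overhead, so the near-duplicate score lands somewhat above the ideal 0.5):

    import zlib

    def C(s: bytes) -> int:
        # Compressed size of s in bytes.
        return len(zlib.compress(s, 9))

    def CDM(x: bytes, y: bytes) -> float:
        # Compression-based dissimilarity: low when y adds little beyond x,
        # approaching 1.0 when x and y are unrelated.
        return C(x + y) / (C(x) + C(y))

    x  = b"The quick brown fox jumps over the lazy dog. " * 20
    x2 = x.replace(b"lazy", b"idle")   # a near-duplicate variant of x
    z  = bytes(range(256)) * 4         # unrelated, mostly incompressible data

    print(CDM(x, x2))  # well below 1: knowing both adds little, p(x, x') ~ p(x)
    print(CDM(x, z))   # close to 1: z carries independent information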
e representation of it.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, August 27, 2006 12:36:25 PM
Subject: Re: [agi] Lossy *&* lossless compression
Matt,
Unless you raise
converting Wikipedia to
canonical form.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, August 27, 2006 11:30:44 AM
Subject: Re: [agi] Lossy *&* lossless compression
>> Supp
- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, August 26, 2006 8:52:27 PM
Subject: Re: [agi] Lossy *&* lossless compression
I think that either putting Wikipedia in canonical form, or recognizing
that it is in canonical form, are two equally difficult problems. So the
problem does not go away easily.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Saturday, Au
>> Mark suggested putting Wikipedia in a
canonical form, which would remove the distinction between lossless and lossy
compression.
Hmmm. Interesting . . . . Actually, I didn't suggest exactly that -- though
I can see how you got that impression. I suggested that the decompression
program
as *nothing* to do with KNOWLEDGE
(though, maybe everything to do with judging).
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Friday, August 25, 2006 7:54 PM
Subject: Re: [agi] Lossy *&* lossless compression
- Original Message
However, a machine with a lossless model will still outperform one with a
lossy model because the lossless model has more knowledge.
PKZip has a lossless model. Are you claiming that it has more knowledge?
More data/information *might* be arguable but certainly not knowledge -- and
PKZip cert
>> Now try that on my daughter or any other 3.5-year-old. It doesn't work. :}
Try what? Your daughter has calibrated her
vision and stuck labels on the gauge. What has she learned? That
this range reported by *her* personal vision system is labeled
yellow.
Now, you want to do this witho
em? Meanwhile, a "dumb" model like
matching query words to document words enables Google to answer natural
language queries, while our smart parsers choke when you misspell a
word. Who is smart and who is dumb? -- Matt Mahoney,
[EMAIL PROTECTED]
- Original Message -
Yellow is the state of reflecting light whose frequency lies between two
specific values.
Hot is the state of having a temperature above some
set value.
It takes examples to recognize/understand when your sensory apparatus is
reporting one of these states, but this is a calibration issue, not a
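A minimal sketch of the labels-on-the-gauge point (the roughly 570-590 nm band for yellow is standard; the 40 C set point is an arbitrary illustrative value):

    def color_label(wavelength_nm: float) -> str:
        # Calibration: the sensor reports a number; examples taught us which
        # range of numbers gets the label "yellow" (~570-590 nm).
        return "yellow" if 570.0 <= wavelength_nm <= 590.0 else "not yellow"

    def temperature_label(temp_c: float, set_point_c: float = 40.0) -> str:
        # "Hot" is just a reading above a calibrated set point.
        return "hot" if temp_c > set_point_c else "not hot"

    print(color_label(580.0))       # yellow
    print(temperature_label(45.0))  # hot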
applied for a $50K NSF grant for a text
compression contest. It was rejected, so I started one without funding
(which we now have). The problem is that many people do not believe that
text compression is related to AI (even though speech recognition researchers
have been evaluating mo
t; <[EMAIL PROTECTED]>
To:
Sent: Tuesday, August 15, 2006 10:46 PM
Subject: **SPAM** Re: Goetz/Goertzel/Sampo: [agi] Marcus Hutter's lossless
compression of human knowledge prize
On 8/15/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> I think it would be more interesting for it
rm") < CDM("it is hot",
"it is cold").assuming your compressor uses a good language
model.Now if only we had some test to tell which compressors have the
best language models...
-- Matt Mahoney, [EMAIL PROTECTED]
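For what it's worth, a literal compressor cannot reproduce that ordering; a quick sketch (same CDM definition as above) shows why a separate test for language models is wanted:

    import zlib

    def C(s: bytes) -> int:
        return len(zlib.compress(s, 9))

    def CDM(x: bytes, y: bytes) -> float:
        return C(x + y) / (C(x) + C(y))

    # zlib sees only the shared literal substring "it is ", not the semantic
    # closeness of hot/warm versus hot/cold, so any gap between these two
    # scores is noise rather than meaning.
    print(CDM(b"it is hot", b"it is warm"))
    print(CDM(b"it is hot", b"it is cold"))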
- Original Message -
From: Mark
Subject: **SPAM** Re: Goetz/Goertzel/Sampo: [agi] Marcus Hutter's lossless
compression of human knowledge prize
On 8/15/06, Mark Waser <[EMAIL PROTECTED]> wrote:
Actually, instructing the competitors to compress both the OpenCyc corpus
AND then the Wikipedia sample in sequence and mea
t; <[EMAIL PROTECTED]>
To:
Sent: Tuesday, August 15, 2006 3:16 PM
Subject: **SPAM** Re: Goertzel/Sampo: [agi] Marcus Hutter's lossless
compression of human knowledge prize
On 8/15/06, Mark Waser <[EMAIL PROTECTED]> wrote:
Ben >> Conceptually, a better (though still deeply flawed
not imply AI.
>> A lossy text compressor that did the same thing (recall it in paraphrased fashion) would certainly demonstrate AI.
I disagree that these are inconsistent. Demonstrating and implying are different things.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message -
. -- Matt Mahoney,
[EMAIL PROTECTED]
- Original Message -
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, August 15, 2006 9:28:26 AM
Subject: Re: Mahoney/Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize
>> I
don
ed) contest would be:
Compress this file of advanced knowledge, assuming as background
knowledge this other file of elementary knowledge, in terms of which
the advanced knowledge is defined.
-- Ben G
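One crude way to score that variant (a sketch only; it assumes concatenation is an acceptable stand-in for conditioning, which real contest mechanics would refine): charge the competitor just the incremental compressed size of the advanced file once the background file is free.

    import zlib

    def C(data: bytes) -> int:
        # Compressed size in bytes at maximum zlib effort.
        return len(zlib.compress(data, 9))

    def incremental_cost(background: bytes, advanced: bytes) -> int:
        # Approximate C(advanced | background) as C(background + advanced) - C(background).
        return C(background + advanced) - C(background)

    # Illustrative stand-in files, not contest data:
    background = b"A triangle has three sides. A square has four sides. " * 50
    advanced = b"A triangle has three sides, so its angles sum to 180 degrees."

    print(C(advanced))                             # cost coded from scratch
    print(incremental_cost(background, advanced))  # typically lower: shared phrasing is free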
On 8/15/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
On 8/15/06, Mark Waser <[EMAIL PROT
pressing a bunch of other files
... and storing the knowledge it learned via this experience in its
long-term memory...
-- Ben
On 8/15/06, Mark Waser <[EMAIL PROTECTED]> wrote:
Hi Ben,
I agree with everything that you're saying; however, looking at the
specific task:
Create a compres
>> I don't see any point in this debate over lossless vs. lossy compression
Let's see if I can simplify it.
The stated goal is compressing human knowledge.
The exact same knowledge can always be expressed in a *VERY* large number of different bit strings.
Not being able to reprod
Hi Ben,
I agree with everything that you're saying; however, looking at the
specific task:
Create a compressed version (self-extracting archive) of the 100MB file
enwik8 of less than 18MB. More precisely:
a. Create a Linux or Windows executable archive8.exe of size S < L := 18'324'887 =
ession of human knowledge prize
On 8/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
>> The storage of those inferences is not included in results as far as
I know. I haven't really read the rules but . . . .
You should read the rules before
trary to the stated
rules.
Mark
- Original Message -
From: Sampo Etelavuori
To: agi@v2.listbox.com
Sent: Monday, August 14, 2006 5:53 PM
Subject: **SPAM** Re: Sampo: [agi] Marcus Hutter's lossless compression of human knowledge prize
On 8/14/06, Ma
- Original Message -
From: Sampo Etelavuori
To: agi@v2.listbox.com
Sent: Monday, August 14, 2006 1:06 PM
Subject: **SPAM** Re: [agi] Marcus Hutter's lossless compression of human knowledge prize
On 8/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
>
ong, plus the cost of the code for "person" based on its overall frequency
in the vocabulary. The smart compressor first predicts "human", then
predicts a class of words semantically related to "human". The coding cost
of the initial error is the same (the incompressible part), but the
additional cost of coding from the same semantic class is smaller than
coding from the entire vocabulary.
-- Matt Mahoney, [EMAIL PROTECTED]
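The arithmetic behind that claim, sketched with made-up round numbers (the 100,000-word vocabulary and 50-word semantic class are assumptions for illustration): coding one choice out of n roughly equally likely options costs about log2(n) bits, so shrinking the candidate set shrinks the cost of correcting the miss.

    import math

    VOCAB_SIZE = 100_000  # hypothetical size of the whole vocabulary
    CLASS_SIZE = 50       # hypothetical class of words semantically near "human"

    def uniform_code_bits(n: int) -> float:
        # Bits needed to code one choice out of n equally likely options.
        return math.log2(n)

    # Dumb compressor: after the miss, recode "person" against the whole vocabulary.
    print(uniform_code_bits(VOCAB_SIZE))  # ~16.6 bits
    # Smart compressor: recode "person" within the small semantic class.
    print(uniform_code_bits(CLASS_SIZE))  # ~5.6 bits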
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, August 13,
Nineteenth International Joint Conference on
Artificial Intelligence (IJCAI-05), 1136-1141, Edinburgh, Scotland, 2005.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, August 13, 2006 5:25:19 PM
Subject: Re:
I think the Hutter prize will lead to a better understanding of how we
learn semantics and syntax.
I have to disagree strongly. As long as you are requiring recreation at the
bit level as opposed to the semantic or logical level, you aren't going to
learn much at all about semantics or syntax (o