Josh,
Thank you very much for the pointers (and replying so rapidly).
You're very right that people misinterpret and over-extrapolate econ and game
theory, but when properly understood and applied, they are valuable tools for
analyzing the forces shaping the further evolution of AGIs and i
lp with the documentation . . . . then the C#.
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc:
Sent: Monday, May 26, 2008 10:10 AM
Subject: Re: Mark Waser arguing that OpenCog should be recoded in .Net ;-p
Mark,
If it were
On Sunday 25 May 2008 10:06:11 am, Mark Waser wrote:
> Read the appendix, p37ff. He's not making arguments -- he's explaining, with
> a few pointers into the literature, some parts of completely standard and
> accepted economics and game theory. It's all very bas
My only real quibble was with the notion that choosing .NET would not
have a material impact on developer participation.
So after all your bluster and BS, you're down to fighting a strawman because
you can't defend anything else that you've claimed?
Where did I claim that .Net would not hav
Which "silly" operating system assertions?
Your own words:
but if you actually look at some rather important tech centers like
Silicon Valley, there is not a Windows server in sight. The dominance of
Unix-based systems there is so complete that it is not even a contest any
more. You are ap
And again, *thank you* for a great pointer!
- Original Message -
From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, May 27, 2008 8:04 AM
Subject: Re: [agi] Goal Driven Systems and AI Dangers [WAS Re: Singularity
Outcomes...]
On Monday 26 May 2008 0
day, May 27, 2008 10:00 AM
Subject: Re: [agi] More Info Please
On Tue, May 27, 2008 at 2:20 PM, Mark Waser wrote:
Geez. What the heck is wrong with you people and your seriously bogus
stats?
Try a real recognized neutral tracking service like Netcraft
(http://news.netcraft.com/archives/web
No Mark. It is partly the result of a deliberate MS policy to make
their market share look bigger than it actually is.
Yes, of course, it's all a Microsoft plot.
Always remember,
the main thing that MS is good at is marketing.
And everyone who uses Microsoft is too stupid to see how inferior
2008 10:35 AM
Subject: **SPAM** RE: [agi] More Info Please
Mark Waser:
> Does anybody have any interest in and/or willingness to program in a
> different environment?
I haven't decided to what extent I'll participate in OpenCog myself yet. For
me, it depends more on whether
CAD/CAM programs (and many others) allow the fairly simple input of complex
objects like the car in your image and can then create an image of the
described object from any view and distance and have done so for years.
The problems are in object isolation (what your first URL dealt with) and in
>> Mark, your reception would be warmer if your behavior was less incessantly
>> abrasive and trollish.
I can accept abrasive (since I do get frustrated with bad science, etc.) but
believe that trollish is rather unfair . . . .
>> I think it's a good idea to work on a .NET implementation, a
- Original Message -
From: Jim Bromer
Subject: [agi] Ideological Interactions Need to be Studied
An excellent thoughtful post. Thank you!
During the past few years, I have often made critical remarks about AI
theories that suggested that some basic method, and especially some
rather
Sunday, June 01, 2008 2:45 PM
Subject: Re: [agi] Ideological Interactions Need to be Studied
On Jun 1, 2008, at 11:02 AM, Mark Waser wrote:
One is elegance. It would be "oh, so nice" to find one idea that would
solve the entire problem. After all, everyone knows that the single
AIL PROTECTED]>
To:
Sent: Sunday, June 01, 2008 3:22 PM
Subject: Re: [agi] Ideological Interactions Need to be Studied
On Jun 1, 2008, at 12:17 PM, Mark Waser wrote:
Neurons are *NOT* simple. There are all sorts of physiological features
that affect their behavior, etc. While I totally
)?
- Original Message -
From: "J. Andrew Rogers" <[EMAIL PROTECTED]>
To:
Sent: Sunday, June 01, 2008 4:32 PM
Subject: Re: [agi] Ideological Interactions Need to be Studied
On Jun 1, 2008, at 12:39 PM, Mark Waser wrote:
What do you mean by computationally simple?
Interactions Need to be Studied
On Jun 1, 2008, at 1:44 PM, Mark Waser wrote:
So . . . . given that the biological neurons have all this additional
complexity that I have listed before, are you going to attempt to
implement it or are you going to declare it as unnecessary (with the
potent
logical Interactions Need to be Studied
On Jun 1, 2008, at 3:03 PM, Mark Waser wrote:
I find it very interesting that you can't even answer a straight yes-
or-no question without resorting to obscuring BS and inventing strawmen.
By "obscuring BS and inventing strawmen" I assu
imir Nesov" <[EMAIL PROTECTED]>
To:
Sent: Monday, June 02, 2008 12:01 PM
Subject: Re: [agi] Ideological Interactions Need to be Studied
On Mon, Jun 2, 2008 at 6:27 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
But, why "SHOULD" there be a *simple* model that
To believe that you need
something more complex, you need evidence.
Yes, and the evidence that you need something more complex is overwhelming
in this case (if you have anywhere near adequate knowledge of the field).
- Original Message -
From: "Vladimir Nesov" <[EMAIL PROTECTED]>
T
n 2, 2008 at 10:23 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
At any rate, as Richard points out, y'all are so far from reality that
arguing with you is not a wise use of time. Do what you want to do. The
proof will be in how far you get.
I don't know what you mean. This particu
d
On Mon, Jun 2, 2008 at 11:06 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
To believe that you need
something more complex, you need evidence.
Yes, and the evidence that you need something more complex is overwhelming
in this case (if you have anywhere near adequate knowledge of t
ied
On Tue, Jun 3, 2008 at 12:04 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
Good luck with your blank slate AI.
Remember about the blank slate evolution...
--
Vladimir Nesov
[EMAIL PROTECTED]
---
agi
Archives: http://www.listbox.com/member
Consciousness clearly requires feedback - read Hofstadter's I Am A Strange Loop
(http://www.amazon.com/Am-Strange-Loop-Douglas-Hofstadter/dp/0465030793/ref=sr_11_1?ie=UTF8&qid=1212604343&sr=11-1)
Recordings of consciousness are not consciousness in the same way that a CD is
not music.
- Or
Richard said
But I have no problem with this at all! :-). This is exactly what I
believe, but I was arguing against a different claim! Rogers did actually
say that "neurons are simple" and then went on to claim that they were
simple because (essentially) you could black-box them with som
Hi Steve,
I'm thinking about the solution to the "Friendliness" problem, and in
particular desperately need to finish my paper on it for the AAAI Fall
Symposium that is due by next Sunday.
What I would suggest, however, is that quickly formatted e-mail postings are
exactly the wrong method for
Isn't your Nirvana trap exactly equivalent to Pascal's Wager? Or am I
missing something?
- Original Message -
From: "J Storrs Hall, PhD" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, June 11, 2008 10:54 PM
Subject: Re: [agi] Nirvana
On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov
D]>
To:
Sent: Thursday, June 12, 2008 11:24 AM
Subject: Re: [agi] Nirvana
If you have a program structure that can make decisions that would otherwise
be vetoed by the utility function, but get through because it isn't executed
at the right time, to me that's just a bug.
Josh
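Josh's "that's just a bug" point can be made concrete with a toy sketch (my
framing, not anyone's actual architecture): if every candidate decision is
routed through the utility check before execution, nothing "gets through";
a code path that skips the check is simply a defect.

```python
def decide(candidates, utility, floor):
    """Pick the highest-utility candidate action, vetoing any whose
    utility falls below `floor`. Toy sketch only."""
    allowed = [a for a in candidates if utility(a) >= floor]  # the veto step
    if not allowed:
        return None  # every candidate was vetoed; do nothing
    return max(allowed, key=utility)

# Hypothetical utility table -- names and numbers are made up.
u = {"help": 5, "wait": 1, "harm": -10}.get

assert decide(["help", "wait", "harm"], u, floor=0) == "help"
assert decide(["harm"], u, floor=0) is None  # vetoed, never executed
```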
On
er: neither one represents
a
principled, trustable solution that allows for true moral development and
growth.
Josh
On Thursday 12 June 2008 11:38:23 am, Mark Waser wrote:
You're missing the *major* distinction between a "program structure that can
make decisions that would otherwise be vet
Most people are about as happy as they make up their minds to be.
-- Abraham Lincoln
In our society, after a certain point where we've taken care of our
immediate needs, arguably we humans are and should be subject to the Nirvana
effect.
Deciding that you can settle for something (if your sub
problems are moral
ones, how to live in increasingly complex societies without killing each
other, and so forth. That's why it matters that an AGI be morally
self-improving as well as intellectually.
pax vobiscum,
Josh
On Friday 13 June 2008 12:29:33 pm, Mark Waser wrote:
Most people are a
Yes, but I strongly disagree with assumption one. Pain avoidance and
pleasure are best viewed as status indicators, not goals.
- Original Message -
From: "Jiri Jelinek" <[EMAIL PROTECTED]>
To:
Sent: Friday, June 13, 2008 3:42 PM
Subject: Re: [agi] Nirvana
Mark,
Assuming that
a) pain avoidance and pleasure seeking are our primary driving forces;
On Fri, Jun 13, 2008 at 3:47 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
Yes, but I strongly disagree with assumption one. Pain avoidance and
pleasure are best viewed as status indicators, not goals.
Pain and pleasure [le
Ben, may I request that you request that this conversation be moved to sl4? It
is much more appropriate there.
- Original Message -
From: bwxfi obgwyg
To: agi@v2.listbox.com
Sent: Saturday, June 14, 2008 10:09 AM
Subject: [agi] World domination, but not killing grandchildren
My point is simply that an AGI should be able to think about such
concepts, like we do. It doesn't need to solve them. In this sense I
think it is a fundamental concern: how is it possible to have a form
of knowledge representation that can in principle capture all ideas a
human might express? Int
Solving the problem of how to use natural language
would (very probably) be equivalent to solving the problem of AGI.
I agree with you -- but I would point out that language is a very concrete
thing and gives us something to experiment upon, work with, and be guided by
on the road to AGI.
It
Mark Waser wrote:
Given sufficient time, anything should be able to be understood and
debugged.
Give me *one* counter-example to the above . . . .
Matt Mahoney replied:
Google. You cannot predict the results of a search. It does not help
that you have full access to the Internet
that it can't store more information than this.
It doesn't matter if you agree with the number 10^9 or not. Whatever the
number, either the AGI stores less information than the brain, in which case
it is not AGI, or it stores more, in which case you can't know everything it
do
rain it. Trying to debug the reasoning for its behavior would
be like trying to understand why a driver made a left turn by examining the
neural firing patterns in the driver's brain.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]
to understand why a driver made a left
turn by examining the neural firing patterns in the driver's brain.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 9:39:14 AM
Subject:
The connection between intelligence and compression is not obvious.
The connection between intelligence and compression *is* obvious -- but
compression, particularly lossless compression, is clearly *NOT*
intelligence.
Intelligence compresses knowledge to ever simpler rules because that is a
t; if that
article is all that you've seen on the topic (though one would have hoped that
an integrity check or a reality check would have prompted further evaluation --
particularly since the article itself mentions that that would require an
unreasonably/impossibly large amount of RAM.)
ause we are discarding irrelevant data. If we
anthropomorphise the agent, then we say that we are replacing the input with
perceptually indistinguishable data, which is what we typically do when we
compress video or sound.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser &
> However, it has not yet been as convincingly disproven as the Cyc-type
> approach of feeding an AI commonsense knowledge encoded in a formal
> language ;-)
Actually, I would describe the Cyc-type approach as feeding an AI common-sense
data which then begs all sorts of questions . . . .
- O
w why the statement is irrelevant, or
d) concede the point?
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Thursday, November 16, 2006 11:52 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser <[EMAIL PROTECTED]>
wrote:
.listbox.com
Sent: Thursday, November 16, 2006 1:02 PM
Subject: Re: [agi] One grammar parser URL
what's your definition of the difference between data and knowledge then?
Cyc uses a formal language based in logic to describe the things.
James
Mark Waser <[EMAIL PROTECTED]> wrote:
> However, i
ssible agent, environment, universal Turing machine and pair of guessed
programs. I also don't believe Hutter's paper proved it to be a general trend
(by some reasonable measure). But I wouldn't doubt it.
-- Matt Mahoney, [EMAIL PROTECTED]
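The prediction/compression link this thread keeps circling can be illustrated
in a few lines (my toy example, not Matt's or Baum's formal argument): a model
that has learned a text's symbol statistics assigns it a shorter ideal code
length than a model that knows nothing about it.

```python
import math
from collections import Counter

def code_length_bits(text, model):
    """Ideal (Shannon) code length of `text` under `model`,
    where `model` maps each symbol to its probability."""
    return sum(-math.log2(model[c]) for c in text)

text = "abracadabra"
symbols = set(text)

# Model 1: knows nothing -- uniform over the alphabet actually used.
uniform = {c: 1.0 / len(symbols) for c in symbols}

# Model 2: has "learned" the symbol frequencies of this text.
counts = Counter(text)
learned = {c: counts[c] / len(text) for c in symbols}

bits_uniform = code_length_bits(text, uniform)
bits_learned = code_length_bits(text, learned)
assert bits_learned < bits_uniform  # better prediction -> shorter code
```

The gap between the two code lengths is exactly what a better predictive model
buys you; whether that constitutes "intelligence" is the point under dispute.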
- Original Message
From:
ginal Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Thursday, November 16, 2006 3:01 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser <[EMAIL PROTECTED]> wrote:
Give me a counter-example of knowledge that can't be isolated
As Eric Baum noted, in his book "What Is Thought?" he did not in fact
define intelligence or understanding as compression, but rather made a
careful argument as to why he believes compression is an essential
aspect of intelligence and understanding. You really have not
addressed his argument in y
has more,
and you try to explore the chain of reasoning, you will exhaust the memory
in your brain before you finish.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 3:16:54
ce it tries to make will
be wrong, regardless.
But that means that an architecture for AI will have to have a method for
finding these inconsistencies and correcting them with good efficiency.
James Ratcliff
Mark Waser <[EMAIL PROTECTED]> wrote:
>> I don't believ
The problem is far worse than even James says. The 10^9 figure (at least,
the way Matt derives it) is just for the textual data that you read. That
data, however, probably does NOT have cognitive closure since your
understanding of it is *heavily* based upon your physical experiences.
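Matt's derivation of the 10^9 figure isn't quoted in this excerpt, but a
back-of-the-envelope version under assumptions that are entirely mine
(reading rate, hours per day, compression ratio) shows how a number of that
order falls out:

```python
# All of these assumptions are mine, for illustration only -- not Matt's
# quoted derivation: a heavy reader at ~200 words/min, ~2 hours/day,
# for ~50 years, with text compressing to roughly 2 bytes per word.
words_per_min = 200
minutes_per_day = 120
days = 365 * 50
words_lifetime = words_per_min * minutes_per_day * days  # 438,000,000 words
bytes_lifetime = words_lifetime * 2                      # 876,000,000 bytes
assert 10**8 < bytes_lifetime < 10**10  # i.e., on the order of 10^9 bytes
```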
-
ject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/14/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Even now, with a relatively primitive system like the current
> Novamente, it is not pragmatically possible to understand why the
> system does each thing it do
ns is that you can't see inside it; it only
seems like an invitation to disaster to me. So why is it a better design?
All that I see here is something akin to "I don't understand it so it must
be good".
- Original Message -
From: "Philip Goetz" <[EMA
m, but not what it has learned. If you could understand how it
arrived at a particular solution, then you have failed to create an AI
smarter than yourself.
-- Matt Mahoney, [EMAIL PROTECTED]
- Original Message
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wed
contending/assuming that I've overlooked several thousand examples is pretty
insulting).
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 4:17 PM
Subject: Re: Re: Re: [agi] A question on the symbol-system hypothesis
bol-system hypothesis
On 11/29/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> If you look into the literature of the past 20 years, you will easily
> find several thousand examples.
I'm sorry but either you didn't understand my point or you don't know
what you are t
ation unless you get really, really lucky in choosing your number of
nodes and your connections. Nature has clearly found a way around this
problem but we do not know this solution yet.)
Mark (going off to be plastered by replies to last night's message)
- Original Message
't
understand it :-).
- Original Message -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Wednesday, November 29, 2006 9:36 PM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 11/29/06, Philip Goetz <[EMAIL PROTECTED]> wrote:
Well, it really depends on what you mean by "too complex for a human
to understand." Do you mean
-- too complex for a single human expert to understand within 1 week of
effort
-- too complex for a team of human experts to understand within 1 year of
effort
-- fundamentally too complex for human
Subject: Re: [agi] A question on the symbol-system hypothesis
On 11/30/06, Mark Waser <[EMAIL PROTECTED]> wrote:
With many SVD systems, however, the representation is more
vector-like
and *not* conducive to easy translation to human terms. I have two
answers
to these cases. Answer
Thank you for cross-posting this. Could you please give us more information
on your book?
I must also say that I appreciate the common-sense wisdom and repeated bon
mots that the "sky is falling" crowd seem to lack.
- Original Message -
From: "J. Storrs Hall, PhD." <[EMAIL PROTECTED
I'd be interested in knowing if anyone else on this list has had any
experience with policy-based governing . . . .
Questions like
Are the following things good?
- End of disease.
- End of death.
- End of pain and suffering.
- A paradise where all of your needs are met and wishes fulfilled.
can
On 12/1/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:
The questions you asked above are predicated on a goal stack approach.
You are repeating the same mistakes that I already dealt with.
Philip Goetz snidely responded
Some people would call it "repeating the same mistakes I already dealt
er 02, 2006 2:31 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/2/06, Mark Waser wrote:
My contention is that the pattern that it found was simply not translated
into terms you could understand and/or explained.
Further, and more importantly, the pattern matcher *doesn
He's arguing with the phrase "It is programmed only through evolution."
If I'm wrong and he is not, I certainly am.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Saturday, December 02, 2006 4:26 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI
t an AGI is going to have
to be able to explain/be explained.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Saturday, December 02, 2006 5:17 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
--- Mark Waser <[EMAIL PRO
You cannot turn off hunger or pain. You cannot
control your emotions.
Huh? Matt, can you really not ignore hunger or pain? Are you really 100%
at the mercy of your emotions?
Since the synaptic weights cannot be altered by
training (classical or operant conditioning)
Who says that synapt
ot do this.
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Sunday, December 03, 2006 9:17 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/2/06, Mark Waser <[EMAIL PROTECTED]> wrote:
A nice story but it proves absolutely
a, and all sorts of other problems.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Sunday, December 03, 2006 10:19 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
--- Mark Waser <[EMAIL PROTECTED]>
ntil we build AGI, we really
won't know. I realize I am repeating (summarizing) what I have said
before.
If you want to tear down my argument line by line, please do it privately
because I don't think the rest of the list will be interested.
--- Mark Waser <[EMAIL PROTECTED]> wrot
Ben,
I agree with the vast majority of what I believe that you mean but . . .
1) Just because a system is "based on logic" (in whatever sense you
want to interpret that phrase) doesn't mean its reasoning can in
practice be traced by humans. As I noted in recent posts,
probabilistic logic sy
Whereas my view is that nearly all HUMAN decisions are based on so
many entangled variables that the human can't hold them in conscious
comprehension ;-)
We're reaching the point of agreeing to disagree except . . . .
Are you really saying that nearly all of your decisions can't be explained
(
age -
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 10:45 AM
Subject: Re: Re: [agi] A question on the symbol-system hypothesis
On 12/4/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Philip Goetz gave an example of an intrusion detection s
> Well, of course they can be explained by me -- but the acronym for
> that sort of explanation is "BS"
I take your point with important caveats (that you allude to). Yes, nearly all
decisions are made as reflexes or pattern-matchings on what is effectively
compiled knowledge; however, it is th
You partition intelligence into
* explanatory, declarative reasoning
* reflexive pattern-matching (simplistic and statistical)
Whereas I think that most of what happens in cognition fits into
neither of these categories.
I think that most unconscious thinking is far more complex than
"reflexive
back to the original argument?
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 1:38 PM
Subject: Re: Motivational Systems of an AI [WAS Re: [agi] RSI - What is it
and how fast?]
On 12/4/06, Mark Waser <[EM
list don't even agree on what it means much less what its
implications are . . . .
- Original Message -
From: "Philip Goetz" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 2:03 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
On 12/
To allow that somewhere in the Himalayas, someone may be able,
with years of training, to lessen the urgency of hunger and
pain, is not sufficient evidence to assert that the proposition
that not everyone can turn them off completely is insensible.
The first sentence of the proposition was exact
age -
From: "William Pearson" <[EMAIL PROTECTED]>
To:
Sent: Monday, December 04, 2006 5:51 PM
Subject: [agi] Addiction was Re: Motivational Systems of an AI
On 04/12/06, Mark Waser <[EMAIL PROTECTED]> wrote:
> Why must you argue with everything I say? Is this not a s
o be congruent with them (and even
more so in well-balanced and happy individuals).
- Original Message -
From: "BillK" <[EMAIL PROTECTED]>
To:
Sent: Tuesday, December 05, 2006 7:03 AM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
On 12/4/06,
rsation is "Called "The Emotion Machine," it
argues that, contrary to popular conception, emotions aren't distinct from
rational thought; rather, they are simply another way of thinking, one that
computers could perform."
- Original Message -
From: "M
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Tuesday, December 05, 2006 11:17 AM
Subject: Re: Re: Re: Re: [agi] A question on the symbol-system hypothesis
BillK <[EMAIL PROTECTED]> wrote:
On 12/4/06, Mark Waser wrote:
>
> Explaining our actions is the re
mber 05, 2006 11:34 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
Mark Waser <[EMAIL PROTECTED]> wrote:
> Are
> you saying that the more excuses we can think up, the more intelligent
> we are? (Actually there might be something in that!).
S
If there's a market for this, then why can't I even buy a thermostat
with a timer on it to turn the temperature down at night and up in the
morning? The most basic home automation, which could have been built
cheaply 30 years ago, is still, if available at all, so rare that I've
never seen it.
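The setback logic being asked for really is as simple as the paragraph
suggests; a hypothetical sketch (the times, temperatures, and deadband are
made up, not taken from any real product):

```python
def setpoint(hour):
    """Target temperature (F) for a simple night-setback schedule.
    Illustrative numbers only."""
    NIGHT, DAY = 62, 70
    if hour >= 22 or hour < 6:  # 10 pm to 6 am
        return NIGHT
    return DAY

def control(current_temp, hour, hysteresis=1.0):
    """Bang-bang thermostat control with a deadband; returns True to turn
    the heat on, False to turn it off, None to leave it unchanged."""
    target = setpoint(hour)
    if current_temp < target - hysteresis:
        return True
    if current_temp > target + hysteresis:
        return False
    return None  # within the deadband
```

A timer chip and a relay would have sufficed three decades ago; the logic
above is the whole "algorithm."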
What is the best language for AI begs the question --> For which aspect of
AI?
And also --> What are the requirements of *this particular part* of your AI
and who is programming it?
Far and away, the best answer to the best language question is the .NET
framework. If you're using the framewo
e ...". Its a general comment to not reinvent wheels. If the
wheel doesn't fit perfectly, you can build an "adapter" for it.
Bottom line ... Pei is correct. There will not be a consensus on what the
most suitable language is for AI.
Regards,
~Aki
On 18-Feb-07, at 11:3
of day. Didn't you learn anything from the experience?
- Original Message -
From: "Eliezer S. Yudkowsky" <[EMAIL PROTECTED]>
To:
Sent: Sunday, February 18, 2007 12:51 PM
Subject: **SPAM** Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]
k at your
subsequent email to Eliezer. Come on man. Lighten up a little.
Everyone else ... I apologize for taking your time to read this email.
I'm just hoping it'll keep anyone from flaming people and calling them
stupid.
Enough said. I think we can all get along, and learn somet
One reason for picking a language more powerful than the run-of-the-mill
imperative ones (of which virtually all the ones mentioned so far are just
different flavors) is that they can give you access to different paradigms
that will enhance your view of how an AGI should work internally.
Very tru
ssage -
From: "Samantha Atkins" <[EMAIL PROTECTED]>
To:
Sent: Sunday, February 18, 2007 10:22 PM
Subject: Re: Languages for AGI [WAS Re: [agi] Priors and indefinite
probabilities]
Mark Waser wrote:
And, from a practical programmatic way of having code generate code,
thos
Unluckily, after being involved in .Net for quite some time, I do not
share your optimism. In fact I came to think that .Net is not suitable
for anything that requires really high performance and parallelism.
Perhaps the problem is just that it is very very hard to build a really
good VM and proba
My real point is that you don't really need a new dev env for this.
Richard is talking about some *substantial* architecture here -- not
just a development environment but a *lot* of core library routines (as you
later speculate) and functionality that is either currently spread across
man
into the database itself and operate on it there.
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 3:31 PM
Subject: **SPAM** Re: [agi] Development Environments for AI (a few
non-religious comments!)
On 2/20/07, Mark Waser
e -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 5:23 PM
Subject: **SPAM** Re: **SPAM** Re: [agi] Development Environments for AI (a
few non-religious comments!)
On 2/20/07, Mark Waser <[EMAIL PROTECTED]> wrote:
I think that you grossly un
y
integrate. If the second number isn't a lot larger than the first, you're not
living in my world. :-)
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Tuesday, February 20, 2007 6:02 PM
Subject: **SPAM** Re: [agi] Development Environments
I am pretty confident that the specialized indices we use (implemented
directly in C++) are significantly faster than implementing comparable
indices in an enterprise DB would be.
Wow. You've floored me given that indexes are key to what enterprise DBs do
well. What are the special requireme
I think you're exaggerating the issue. Porting the NM code from 32->64
bit was a pain but not a huge deal, certainly a trivial % of the total
work done on the project.
A man-month is a healthy chunk of time/effort (in a field that is starved
for it); however, if it were the only instance, I w
I am pretty confident that the specialized indices we use (implemented
directly in C++) are significantly faster than implementing comparable
indices in an enterprise DB would be.
I'm not sure what discussion of databases has anything to do with AGI.
The discussion started with a development
OT* worth the costs (money, time, effort,
and frustration).
- Original Message -
From: Russell Wallace
To: agi@v2.listbox.com
Sent: Wednesday, February 21, 2007 9:33 AM
Subject: **SPAM** Re: [agi] Development Environments for AI (a few
non-religious comments!)
On 2/21/07,
hey have to be concerned with
much
more than just simple indexing. I know for a fact that my indexes are as
fast or faster than any widely used software DB on the same hardware.
Optimization on DB's is done at a higher level than an index in any case.
David Clark
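To make the index contrast concrete, here is a toy version of the kind of
specialized in-memory index being described (entirely hypothetical, not
Novamente's actual C++ code): it supports exactly one operation and pays none
of the generality costs (transactions, disk pages, query planning) that a
general-purpose enterprise DB index must carry.

```python
from collections import defaultdict

class InvertedIndex:
    """Toy specialized index: term -> set of item ids. No transactions,
    no disk pages, no SQL planner -- just the one lookup we need."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, item_id, terms):
        for t in terms:
            self.postings[t].add(item_id)

    def query_all(self, terms):
        """Ids containing every term (conjunctive query)."""
        sets = [self.postings[t] for t in terms]
        if not sets:
            return set()
        sets.sort(key=len)         # intersect smallest posting list first
        result = set(sets[0])
        for s in sets[1:]:
            result &= s
        return result

idx = InvertedIndex()
idx.add(1, ["neuron", "simple"])
idx.add(2, ["neuron", "complex"])
idx.add(3, ["simple", "complex"])
```

Because the structure is specialized to one query shape, each lookup is a few
hash probes and set intersections, which is the kind of headroom being claimed
over a general-purpose DB on the same hardware.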
- Original Messag