On Oct 18, 2007, at 11:32 PM, John G. Rose wrote:
It's really hard to sell if the deliverable time frame exceeds 3 to
4 years.
Why does an AGI deliverable require more than 3-4 years? You better
have a good answer for that, or no one will fund you. Most people
*don't* have a good answer for that.
In response to Vladimir Nesov's Fri 10/19/2007 5:28 AM post.
Nesov Edward,
Nesov Does your estimate consider only the amount of information required
for *representation*, or does it also include additional processing elements
required in a neural setting to implement learning?
EWP The large numbers I
Josh: People learn best when they receive simple, progressive, unambiguous
instructions or examples. This is why young humans imprint on
parent-figures,
have heroes, and so forth -- heuristics to cut the clutter and reduce
conflict of examples. An AGI that was trying to learn from the Internet
On Friday 19 October 2007 01:30:43 pm, Mike Tintner wrote:
Josh: An AGI needs to be able to watch someone doing something and produce a
program such that it can now do the same thing.
Sounds neat and tidy. But that's not the way the human mind does it.
A vacuous statement, since I stated
Josh: An AGI needs to be able to watch someone doing something and produce a
program such that it can now do the same thing.
Sounds neat and tidy. But that's not the way the human mind does it. We
start from ignorance and confusion about how to perform any given skill/
activity - and while we
Well, one problem is that the current mathematical definition of general
intelligence
is exactly that -- a definition of totally general intelligence, which is
unachievable
by any finite-resources AGI system...
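The definition presumably being alluded to here is Legg and Hutter's universal intelligence measure, which scores a policy pi by its expected reward across all computable environments, weighted by their descriptive simplicity:

```latex
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here E is the set of computable reward-bounded environments, K(mu) is the Kolmogorov complexity of environment mu, and V^pi_mu is the expected total reward of pi in mu. Since K is incomputable, the measure can be neither evaluated nor maximized by any finite-resources system, which is exactly the objection above.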
On the other hand, IQ tests and such measure domain-specific capabilities as
much as
John: I think that there really need to be more very specifically defined
quantitative measures of intelligence. ...Other qualities like creativity
and imagination would need to be measured in other ways.
The only kind of intelligence you can measure with any precision is narrow
AI -
Josh,
Great post. Warrants being read multiple times.
You said:
JOSH I'm working on a formalism that unifies a very high-level
programming language (whose own code is a basic datatype, as in lisp),
spreading-activation semantic-net-like representational structures, and
subsumption-style
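For readers unfamiliar with the middle ingredient, here is a minimal sketch of spreading activation over a semantic net; the node names and edge weights are invented for illustration and have nothing to do with Josh's actual formalism:

```python
# Toy spreading activation over a tiny weighted semantic net.
# Graph, weights, and decay factor are all made-up illustrative values.

def spread(graph, activation, decay=0.5, steps=3):
    """Propagate activation along weighted edges for a fixed number of steps."""
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            for neighbor, weight in graph.get(node, {}).items():
                # each active node passes a decayed fraction of its
                # activation to its neighbors
                new[neighbor] = new.get(neighbor, 0.0) + act * weight * decay
        activation = new
    return activation

graph = {
    "dog":    {"animal": 0.9, "bark": 0.8},
    "animal": {"dog": 0.4, "cat": 0.4},
    "bark":   {"dog": 0.6},
}
result = spread(graph, {"dog": 1.0})
print(sorted(result, key=result.get, reverse=True))  # nodes, most active first
```

Priming "dog" activates semantically nearby concepts ("animal", "bark", and indirectly "cat"), which is the basic retrieval behavior such representations are meant to give you.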
I think that there really need to be more very specifically defined
quantitative measures of intelligence. If only there were questions that could
be asked of an AGI that would require x units of intelligence to solve, and
that would otherwise be unsolvable. I know that this is a hopeless foray on
this
There's a really nice blog at
http://karmatics.com/docs/evolution-and-wisdom-of-crowds.html talking about
the intuitiveness (or not) of evolution-like systems (and a nice glimpse of
his Netflix contest entry using a Kohonen-like map builder).
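As a rough illustration of what a Kohonen-style map does, here is a generic 1-D self-organizing map trained on made-up scalar data; this is a textbook sketch, not the blog author's Netflix code:

```python
import random

# Minimal 1-D Kohonen self-organizing map on scalar inputs.
# Unit count, learning rate, and data are illustrative choices.

def train_som(data, n_units=5, epochs=50, lr=0.3, radius=1):
    random.seed(0)
    weights = [random.random() for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # best-matching unit: the unit whose weight is closest to the input
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            # pull the BMU and its grid neighbors toward the input
            for i in range(max(0, bmu - radius), min(n_units, bmu + radius + 1)):
                weights[i] += lr * (x - weights[i])
    return weights

data = [0.1, 0.15, 0.5, 0.55, 0.9, 0.95]
w = train_som(data)
print([round(v, 2) for v in sorted(w)])  # units migrate toward the data clusters
```

The neighborhood update is what makes it "evolution-like" in its unintuitiveness: no unit is told where to go, yet the map self-organizes so that nearby units end up representing nearby regions of the data.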
Most of us here understand the value of a market or
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED]]
AGI is poorly suited for venture capital in every case I can think
of. Ignoring everything else, it tends to leave the venture
constantly begging for capital which has serious consequences on
performance and reputation. It is a Catch-22,
Edward,
Does your estimate consider only the amount of information required for
*representation*, or does it also include additional processing elements
required in a neural setting to implement learning? I'm not sure 10^9 is far
off, because much more can be required for domain-independent
From: J. Andrew Rogers [mailto:[EMAIL PROTECTED]]
Subject: Re: [agi] More public awareness that AGI is coming fast
Why does an AGI deliverable require more than 3-4 years? You better
have a good answer for that, or no one will fund you. Most people
*don't* have a good answer for that.
Ben wrote:
Having said that, I would still prefer to avoid the VC route for Novamente.
Another route that Novamente is apparently exploring is open source
development, with OpenCog. It will be very interesting to see
how it pans out, and what level of interest and involvement from the