Saturday, November 9, 2002, 2:27:43 PM, Ben Goertzel wrote:
BG No question there! Proto-AGI-based applications can certainly outperform
BG narrow-AI-based applications in some areas.
BG The problem is that in many market niches, performance (in the sense of
BG intelligent performance) doesn't
(via Metafilter)
http://www.captcha.net/ aims to provide automated testing to
distinguish humans from bots on the web to prevent e.g. ballot
stuffing at online opinion polls, spammers automating yahoo mail
signups, and the like.
NYTimes article at
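The basic shape of the thing, as I understand it (a toy sketch of my own, not
captcha.net's actual code): pick a random string, render it as a distorted
image a human can read but simple OCR can't, and check the reply against the
stored answer:

import random
import string
from PIL import Image, ImageDraw, ImageFilter  # Pillow

def make_captcha(length=6, size=(200, 70)):
    # The secret the server keeps and later checks against the user's answer.
    text = "".join(random.choices(string.ascii_uppercase + string.digits, k=length))
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    # Jitter each character's position so the layout isn't trivially segmentable.
    for i, ch in enumerate(text):
        x = 15 + i * 28 + random.randint(-4, 4)
        y = 25 + random.randint(-10, 10)
        draw.text((x, y), ch, fill="black")
    # A few stray lines as clutter for naive OCR.
    for _ in range(8):
        draw.line(
            [(random.randint(0, size[0]), random.randint(0, size[1])),
             (random.randint(0, size[0]), random.randint(0, size[1]))],
            fill="gray")
    return text, img.filter(ImageFilter.SMOOTH)

answer, image = make_captcha()
image.save("captcha.png")   # serve this to the user
print("expected answer:", answer)

Real CAPTCHAs distort far more aggressively, of course; this just shows the
challenge-response structure.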
Thursday, December 12, 2002, 1:19:47 AM, Cliff Stabbert wrote:
A clarification:
CS I'm not quite getting their generic space and
CS blended space concepts; it all seems a bit forced and
CS overabstracted.
In their monk-mountain and regatta-race examples I get the mental
overlapping of the same
Thursday, December 26, 2002, 4:44:25 PM, Alan Grimes wrote:
AG A human-level intelligence requires arbitrary access to
AG visual/phonetic/other faculties in order to be intelligent.
In order to communicate intelligently and intelligibly with us, yes.
In order to _be_ intelligent, no.
AG A system
Friday, December 27, 2002, 5:15:40 AM, Shane Legg wrote:
SL One other thing; if one really is focused on natural language
SL learning why not make things a little easier and use an artificial
SL language like Esperanto? Unlike highly artificial languages
SL like logic based or maths based
Friday, January 3, 2003, 11:37:15 PM, Mike Deering wrote:
MD The intelligence of computer software keeps constant with the
MD capability of the $1000 desktop.
I strongly disagree. The intelligence of computer software has
remained pretty constant. The feature lists (and memory, disk and
Friday, January 10, 2003, 10:36:35 PM, Kevin Copple wrote:
KC Well, my "The Next Wave" post was intended to be humorous. I'm not that much
KC of a comedian, so I may have weighed in too heavily on the apparently serious side.
KC Let me apologize to the extent it was a feebly frivolous failure.
The line
Tuesday, February 11, 2003, 9:44:31 PM, Shane Legg wrote:
SL However even within this scenario the concept of fixed goal is
SL something that we need to be careful about. The only real goal
SL of the AIXI system is to get as much reward as possible from its
SL environment. A goal is just our
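For reference, the definition as I remember it from Hutter's papers (from
memory, so treat the details as approximate): at cycle k, with horizon m,
AIXI picks the action that maximizes expected future reward under a
Solomonoff-style mixture over all programs q consistent with the history:

  a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
         (r_k + \cdots + r_m) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is the reference universal machine and \ell(q) is the length of
program q. The "goal" is nothing more than this sum of future rewards.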
Wednesday, February 12, 2003, 12:00:56 AM, Shane Legg wrote:
SL Yes, if the universe is not somehow predictable in the sense of
SL being compressible then the AI will be screwed. It doesn't have
SL to be perfectly predictable; it just can't be random noise.
Thanks for your responses.
Some more
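To make the compressibility point concrete for myself, I tried the following
toy sketch (zlib standing in, very loosely, for "finding structure"): a
patterned sequence compresses well, so there is something for a predictor to
exploit; random noise doesn't, so there isn't:

import os
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size over original size: well below 1.0 means structure was found.
    return len(zlib.compress(data)) / len(data)

patterned = b"abcabcabc" * 1000   # highly regular, hence easy to predict
noise = os.urandom(9000)          # pure noise, nothing to exploit

print("patterned:", round(compression_ratio(patterned), 3))   # much less than 1
print("noise:    ", round(compression_ratio(noise), 3))        # about 1, or slightly above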
Saturday, February 15, 2003, 1:25:19 PM, Daniel Colonnese wrote:
DC A few days ago this article came out:
DC http://www.israel21c.org/bin/en.jsp?enPage=BlankPage&enDisplay=view&enDispWhat=object&enDispWho=Articles%5El306&enZone=Technology&enVersion=0
DC A company called Meganet is claiming that
Thursday, February 20, 2003, 10:58:57 AM, Ben Goertzel wrote:
BG OK... I can see that I formulated the problem too formally for a lot of
BG people
BG I will now rephrase it in the context of a specific test problem.
snip
BG I don't know if this test problem will clarify things or confuse them
Thursday, February 20, 2003, 2:25:54 PM, Ben Goertzel wrote:
BG The basic situation can be thought of as follows.
snip
Thanks, this does clarify things a lot. Your first statement of the
problem did leave some things out though...but, perhaps
unsurprisingly, I'm still a bit puzzled.
I don't
Thursday, February 20, 2003, 8:11:48 PM, Ben Goertzel wrote:
CS Somehow I see this ending up as finding a set of bell curves (i.e.
CS their height, spread and optimum) for each estimate. That is to say I
CS don't see *just* the probability as relevant but the probability
CS distribution...if I
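Something like the following is what I have in mind (the names are mine,
purely illustrative, nothing to do with Novamente's actual structures): carry
a whole distribution per estimate rather than a point probability, e.g. a
Beta whose mean is the estimate and whose spread reflects how much evidence
backs it:

from dataclasses import dataclass

@dataclass
class Estimate:
    a: float  # pseudo-count of supporting evidence
    b: float  # pseudo-count of opposing evidence

    @property
    def mean(self) -> float:
        # Mean of a Beta(a, b): the point probability we'd otherwise report.
        return self.a / (self.a + self.b)

    @property
    def variance(self) -> float:
        # Variance of a Beta(a, b): shrinks as evidence accumulates.
        n = self.a + self.b
        return (self.a * self.b) / (n * n * (n + 1))

weak = Estimate(a=2, b=1)        # p ~ 0.67, but based on almost no evidence
strong = Estimate(a=200, b=100)  # same mean, far narrower spread
print(weak.mean, weak.variance)
print(strong.mean, strong.variance)

Two estimates with the same "probability" can then still be treated very
differently downstream, which is the point I was groping at.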
Monday, February 24, 2003, 6:08:43 PM, Ben Goertzel wrote:
BG Hmmm...
BG So, I'm thinking: The human brain is wired to do a lot of abstract cognition
BG in terms of metaphorical maps of the environment, and these are tied in with
BG macro-world classical physics
I think the spatial/visual
Monday, February 24, 2003, 8:24:22 PM, Ben Goertzel wrote:
BG I wrote, pertaining to problems of positive feedback causing erroneous or
BG uncontrollable dynamics:
The fact that similar problems occur in Novamente inference as well as in
the brain suggests that they're general
Tuesday, February 25, 2003, 11:19:50 AM, Brad Wyble wrote:
CS So what I've been picturing is that organisms, in evolving, are
CS absorbing complexity from the Universe around them. And although I
CS used to think evolution always strives for more complexity, lately I
CS see this a bit
Tuesday, February 25, 2003, 10:12:07 AM, Ben Goertzel wrote:
BG I view complexity as part of a web of concepts that also, centrally,
BG includes pattern
BG Roughly, an entity is complex if there are a whole lot of patterns in it
BG (statically or dynamically)
BG On the other hand, what is a
Tuesday, February 25, 2003, 12:46:12 PM, Brad Wyble wrote:
BW Consider the following thought experiment: a computer able to
BW simulate the earth down to an atomic level (let's put aside the
BW possibility that quantum phenomena influence events on the scale
BW of earth-life). This system has 1
Monday, March 3, 2003, 1:18:18 AM, Kevin Copple wrote:
KC Here's an idea (if it hasn't already been considered before):
KC Each executing component of an AGI has Control Code. This Control Code
KC monitors a Big Red Switch. If the switch is turned on, execution proceeds
KC normally. If
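A minimal sketch of how I picture the Control Code check (all names
hypothetical, and obviously not from any real AGI codebase): every component
polls a shared switch and only keeps working while it reads ON:

import threading
import time

big_red_switch = threading.Event()
big_red_switch.set()  # ON: execution proceeds normally

def component(name: str):
    # The embedded Control Code: re-check the switch before each unit of work.
    while big_red_switch.is_set():
        # ... one unit of the component's normal work would go here ...
        time.sleep(0.1)
    print(name, "halted by Big Red Switch")

workers = [threading.Thread(target=component, args=(f"component-{i}",))
           for i in range(3)]
for w in workers:
    w.start()

time.sleep(0.5)
big_red_switch.clear()  # flip the switch OFF; all components stop
for w in workers:
    w.join()

The obvious worry, which I assume is where Kevin's post was heading, is what
stops a sufficiently capable system from routing around the check.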
Hi, Brad Wyble wrote:
BW I use elm, so I couldn't tell; was there a virus riding on that?
BW
BW Just curious.
Having read the core of the message being forwarded, I was a bit
suspicious. It didn't make sense as a virus, and it didn't make sense
to me that people would forward it.
See
Ben wrote:
BG Oddly, the biggest problem with these robots, in terms of AGI, is probably
BG the limited battery life (for the robot, and the onboard laptop, whose
BG batteries run down fast when running the robot control software). Half an
BG hour is not enough time to learn much, and no human
arnoud wrote:
a Well, to name internal events, there have to be rules/criteria for correct use
a of a term/name, as always with any use of language. How do you know how to
a use the word 'pain', for example? How did you learn its rule(s)? Well, probably
a from your parents, who told you you were
arnoud, first off let me apologize for my tone (not necessarily my
content) towards the end. I shouldn't post when I'm out of coffee ;)
arnoud:
a Yes, of course. Pain without pain behaviour, and also pain
a behaviour without pain can happen. The point is that if that were
a the case most of the
Hi, you wrote:
a I don't quite follow this train of thought. Physicalism to me is
a justified by the success of the natural sciences with their
a materialist ontology of atoms, electrons, molecules and so on.
It is entirely possible to use the scientific method and its fruits
without buying into
arnoud:
a On Sunday 14 September 2003 16:31, Cliff Stabbert wrote:
aca I don't quite follow this train of thought. Physicalism to me is
aca justified by the success of the natural sciences with their
aca materialist ontology of atoms, electrons, molecules and so on.
ac
ac It is entirely possible