Hi,
I am looking for technical papers and/or code for a simple form
of linguistic pattern recognition, specifically, that for finite
automata.
It's well known that a regular language (a type of formal
language) is in 1-1 correspondence with a finite state machine
(each finite state machine can
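A minimal sketch of that correspondence (illustration mine, not from
any paper): a three-state DFA in Python accepting the regular
language of binary numerals divisible by 3.

# Sketch (assumptions mine): state i means "the bits read so far,
# taken as a number, are congruent to i mod 3".  Reading bit b
# maps value v to 2*v + b, hence the transition table below.
DELTA = {
    (0, '0'): 0, (0, '1'): 1,
    (1, '0'): 2, (1, '1'): 0,
    (2, '0'): 1, (2, '1'): 2,
}

def accepts(s: str) -> bool:
    state = 0
    for ch in s:
        state = DELTA[(state, ch)]
    return state == 0          # state 0 is the accepting state

assert accepts('110')          # 6 is divisible by 3
assert not accepts('111')      # 7 is not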
On Thu, Sep 27, 2007 at 05:57:49PM -0700, Matt Mahoney wrote:
Only as an upper bound.
Lower bound. The earliest AGI implementations are likely to be highly
inefficient. Faster algorithms will be found only later, over time, as the
actual problem is understood better.
--linas
On Mon, Oct 01, 2007 at 10:47:36AM -0700, Don Detrich wrote:
[...]
apply to the personality of an AGI with no need for food, no pain, no
hunger, no higher level behavior related to pecking order.
It will presumably be hungry for compute cycles and ergo, electricity.
Ergo, it may want to make
On Mon, Oct 01, 2007 at 12:48:00PM -0700, Matt Mahoney wrote:
The problem is that an intelligent RSI worm might be millions of
times faster than a human once it starts replicating.
Yes, but the proposed means of finding it, i.e. via evolution
and random mutation, is hopelessly time consuming.
On Sun, Sep 30, 2007 at 12:49:43PM -0700, Morris F. Johnson wrote:
Integration of sociopolitical factors into a global evolution predictive
model will require something that the best
economists, scientists, and military strategists will have to get right, or risk
global social anarchy.
FYI, there was
On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
[...]
RSI (Recursive Self Improvement)
[...]
I didn't know exactly what the term covers.
So could you, or someone, please define exactly what its meaning is?
Is it any system capable of learning how to improve its current
On Wed, Oct 03, 2007 at 02:09:05PM -0400, Richard Loosemore wrote:
RSI is only what happens after you get an AGI up to the human level: it
could then be used [sic] to build a more intelligent version of itself,
and so on up to some unknown plateau. That plateau is often referred to
as
On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
From what you say below it would appear human-level AGI would not require
recursive self improvement,
[...]
A lot of people on this list seem to hang a lot on RSI, as they use it,
implying it is necessary for human-level AGI.
On Wed, Oct 03, 2007 at 06:31:35PM -0400, Edward W. Porter wrote:
One of them once told me that in Japan it was common for high school boys
who were interested in math, science, or business to go to abacus classes
after school or on weekends. He said once they fully mastered using
physical
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
When the first AGI is built, its first actions will be to make sure that
nobody is trying to build a dangerous, unfriendly AGI.
Yes, OK, granted, self-preservation is a reasonable character trait.
After that
point, the
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote:
Second, You mention the 3-body problem in Newtonian mechanics. Although
I did not use it as such in the paper, this is my poster child of a
partial complex system. I often cite the case of planetary system
dynamics as an
On Tue, Oct 02, 2007 at 03:03:35PM -0400, Mark Waser wrote:
Do you really think you can show an example of a true moral universal?
Thou shalt not destroy the universe.
Thou shalt not kill every living and/or sentient being including yourself.
Thou shalt not kill every living and/or sentient
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
I like to think of myself as peaceful and non-violent,
On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote:
the
IQ bell curve is not going down. The evidence is it's going up.
So that's why us old folks 'r gettin' stupider as compared to
them's young'uns.
--linas
OK, this is very off-topic. Sorry.
On Fri, Oct 05, 2007 at 06:36:34PM -0400, a wrote:
Linas Vepstas wrote:
For the most part, modern western culture espouses and hews to
physical non-violence. However, modern right-leaning pure capitalism
advocates not only social Darwinism, but also
On Thu, Oct 04, 2007 at 11:06:11AM -0400, Richard Loosemore wrote:
In case anyone else wonders about the same question, I will explain why
the Turing machine equivalence has no relevance at all.
Re-read what you wrote, substituting the phrase "Turing machine" for
each and every occurrence of
On Fri, Oct 05, 2007 at 01:39:51PM -0400, J Storrs Hall, PhD wrote:
On Friday 05 October 2007 12:13:32 pm, Richard Loosemore wrote:
Try walking into any physics department in the world and saying Is it
okay if most theories are so complicated that they dwarf the size and
complexity of
On Sun, Oct 07, 2007 at 02:17:30PM -0400, J Storrs Hall, PhD wrote:
This is the same kind of reasoning that leads Bostrom et al to believe that
we
are probably living in a simulation, which may be turned off at any ti
On Sat, Oct 06, 2007 at 10:05:28AM -0400, a wrote:
I am skeptical that economies exhibit self-organized criticality.
Oh. Well, I thought this was a basic principle, commonly cited in
microeconomics textbooks: when there's a demand, producers rush
to fill the demand. When there's
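(For anyone following along, the standard toy model of self-organized
criticality is the Bak-Tang-Wiesenfeld sandpile; a minimal Python
sketch, illustration mine and not an economic model:)

# Sandpile sketch: drop grains at random; any cell holding 4+ grains
# topples, shedding one grain to each neighbour (grains fall off the
# edge).  Avalanche sizes develop the heavy-tailed distribution
# characteristic of SOC.
import random

N = 20
grid = [[0] * N for _ in range(N)]

def drop() -> int:
    """Drop one grain; return the avalanche size (topplings)."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    size = 0
    stack = [(i, j)]
    while stack:
        x, y = stack.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4
        size += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N:
                grid[nx][ny] += 1
                stack.append((nx, ny))
    return size

sizes = [drop() for _ in range(20000)]
print(max(sizes))   # occasional huge avalanches among mostly tiny ones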
On Sun, Oct 07, 2007 at 12:36:10PM -0700, Charles D Hixson wrote:
Edward W. Porter wrote:
Fred is a human
Fred is an animal
You REALLY can't do good reasoning using formal logic in natural
language...at least in English. That's why the
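(The usual culprit: English "is a" conflates instance-of with
subclass-of. A toy sketch, illustration mine, of the two relations
made explicit:)

# "Fred is a human"      -> instance-of
# "a human is an animal" -> subclass-of
instance_of = {'Fred': 'human'}
subclass_of = {'human': 'animal'}

def isa(x: str, kind: str) -> bool:
    k = instance_of.get(x)
    while k is not None:
        if k == kind:
            return True
        k = subclass_of.get(k)
    return False

assert isa('Fred', 'animal')   # the syllogism goes through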
On Wed, Oct 10, 2007 at 01:06:35PM -0700, Charles D Hixson wrote:
For me the sticking point was that we were informed that we didn't know
anything about anything outside of the framework presented. We didn't
know what a Fred was, or what a human was, or what an animal was.
?? Well, no. In
On Wed, Oct 10, 2007 at 01:22:26PM -0400, Richard Loosemore wrote:
Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
Yes, and I'm sorry I triggered the thread.
I particularly object to libertarianism being shoved down our
Let's take Novamente as an example. ... It cannot improve itself
until the following things happen:
1) It acquires the knowledge and skills to become a competent
programmer, a task that takes a human many years of directed
training and practical experience.
Wrong. This was hashed to
On Fri, Oct 12, 2007 at 05:16:04PM +0100, Mike Tintner wrote:
How is maths grounded?
Wow.
Many algebraic systems are grounded in a set of axioms, which are assumed
to be true.
Our decimal number system is obviously based on the basic numbers 1 - 10 -
which are countable by hand. Digital.
Visuospatial intelligence is required for almost anything.
I'm sorry. This is all pure, unadulterated BS.
Agreed.
autistic savants also have trouble describing their process
when they do math.
My personal theory, which you do not have to accept, is that
Ramanujan was able to train a
My apologies, I amuse myself too easily...
On Fri, Oct 12, 2007 at 04:00:25PM -0500, Linas Vepstas wrote:
not grounded on ZFC, mostly
because it's not constructivist. Non-concrete categories are, well,
roughly speaking bigger than the biggest infinities, and so ZFC doesn't
really address
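(To make the grounded-in-axioms point concrete, a Peano-style sketch,
illustration mine: addition defined by nothing but two axioms over
zero and successor.)

# Peano axioms for addition:  m + 0 = m   and   m + S(n) = S(m + n)
def add(m: int, n: int) -> int:
    return m if n == 0 else add(m, n - 1) + 1

assert add(2, 3) == 5   # grounded in the axioms, not in fingers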
On Sat, Oct 13, 2007 at 03:28:51PM +0100, Mike Tintner wrote:
"I felt sad" is a grounded statement - grounded in your internal
kinaesthetic experience of your emotions.
OK..
Would you like to rephrase your question in the light of this - the common
sense nature of grounding, which I
On Sat, Oct 13, 2007 at 03:35:07PM +0200, Lukasz Kaiser wrote:
it has nothing to do with grounding as discussed here.
OK, clearly, I missed something. What, then, was meant by grounding?
I think that people normally use much more concrete models in their
heads when working and only later
On Wed, Oct 17, 2007 at 10:48:31PM +0200, David Orban wrote:
During the Summit there was a stunning prediction, if I am not mistaken, by
Peter Thiel, who said that the leading corporations on the planet will be
run by their MIS and ERP systems. There is no need for a qualitative change
for
On Thu, Oct 18, 2007 at 12:51:19AM +0200, David Orban wrote:
Your examples are also very good. Should we then assume that, since it
is already the case that major industry segments and corporations are
run by software, and nobody seems to mind, that it will stay like
that?
Good question. Its
On Wed, Oct 17, 2007 at 10:25:18AM -0400, Richard Loosemore wrote:
One way this group have tried to pursue their agenda is through an idea
due to Montague and others, in which meanings of terms are related to
something called possible worlds. They imagine infinite numbers of
possible
Hi,
Aside from Novamente and CYC, who else has attempted to staple
NLP to a reasoning engine? I just pasted a good NLP parser
I found on the net, onto a home-brew, cut-rate reimplementation
of the CYC reasoning engine. I've got simple things working
(answers "what is X?" questions, and remembers
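(Roughly the flavour of the thing, as a toy sketch with all names and
data assumed, and emphatically not my actual code:)

import re

# toy is-a knowledge base
KB = {'dog': 'a domesticated canine',
      'pizza': 'a baked flatbread dish'}

def answer(question: str) -> str:
    m = re.match(r"what is (?:a |an |the )?(\w+)\s*\?", question.lower())
    if m and m.group(1) in KB:
        return f"{m.group(1)} is {KB[m.group(1)]}."
    return "I don't know."

print(answer('What is a dog?'))   # -> dog is a domesticated canine.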
On Wed, Oct 31, 2007 at 05:53:48PM -0700, Matt Mahoney wrote:
--- Linas Vepstas [EMAIL PROTECTED] wrote:
Aside from Novamente and CYC, who else has attempted to staple
NLP to a reasoning engine?
Many have tried, such as BASEBALL in 1961 [1] and SHRDLU in 1968-70 [2]. But
Thanks, read
On Thu, Nov 01, 2007 at 04:35:34PM -0400, Edward W. Porter wrote:
If the nano-electronics revolution delivers on its promise, in fifteen to
twenty-five years most of us should be able to afford and wear (or have
implanted) personal AGIs that can substantially record all of our lives.
Once
On Thu, Nov 01, 2007 at 02:50:12AM -0400, Jiri Jelinek wrote:
Considering
a) how important AGI is
b) how many dev teams seriously work on AGI
How many are there? A dozen? Maybe 100 people total? Less?
c) how many investors are willing to spend good money on AGI RD
How many? VC's I talked
On Thu, Nov 01, 2007 at 06:58:14PM -0400, Pei Wang wrote:
On 11/1/07, Linas Vepstas [EMAIL PROTECTED] wrote:
More importantly, I've started struggling with representing
conversational state. i.e. what are we talking about? what
has been said so far? I've got some inkling on how to expand
On Fri, Nov 02, 2007 at 12:06:05PM -0400, Jiri Jelinek wrote:
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not something you can add on later.
I disagree. You can start with a KB that contains concepts retrieved
On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
if you could have anything you wanted, is this the end you
would wish for yourself, more than anything else?
Yes. But don't forget I would also have AGI
On Fri, Nov 02, 2007 at 11:27:08AM +0300, Vladimir Nesov wrote:
Linas,
Yes, you probably can code all the patterns you need. But it's only
the tip of the iceberg: problem is that for those 1M rules there are
also thousands that are being constantly generated, assessed and
discarded.
On Fri, Nov 02, 2007 at 01:19:19AM -0400, Jiri Jelinek wrote:
Or do we know anything better?
I sure do. But ask me again, when I'm smarter, and have had more time to
think about the question.
--linas
On Fri, Nov 02, 2007 at 09:01:42AM -0700, Charles D Hixson wrote:
To me this point seems only partially valid. 1M hand coded rules seems
excessive, but there should be some number (100? 1000?) of hand-coded
rules (not unchangeable!) that it can start from. An absolute minimum
would seem
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
But learning problem isn't changed by it. And if you solve the
learning problem, you don't need any scaffolding.
But you won't know how to solve the learning problem until you try.
--linas
On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
But learning problem isn't changed by it. And if you solve the
learning problem, you don't need any
On Fri, Nov 02, 2007 at 12:56:14PM -0700, Matt Mahoney wrote:
--- Jiri Jelinek [EMAIL PROTECTED] wrote:
On Oct 31, 2007 8:53 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
Natural language is a fundamental part of the knowledge
base, not something you can add on later.
I disagree. You can
On Sat, Nov 03, 2007 at 12:06:48AM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
On 11/2/07, Linas Vepstas [EMAIL PROTECTED] wrote:
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov
On Sat, Nov 03, 2007 at 12:15:29AM +0300, Vladimir Nesov wrote:
I personally don't see how this appearance-building is going to help,
so the question for me is not 'why can't it succeed?', but 'why do it
at all?'.
Because absolutely no one has proposed anything better?
--linas
On Sat, Nov 03, 2007 at 01:17:03PM -0400, Richard Loosemore wrote:
Isn't there a fundamental contradiction in the idea of something that
can be a tool and also be intelligent? What I mean is, is the word
"tool" usable in this context?
In the 1960's, there was an expression you're just a
Hi,
On Sat, Nov 03, 2007 at 01:41:30AM -0400, Philip Goetz wrote:
Why don't you describe what you've done in more detail, e.g., what
parser you're using, and how you hooked it up to Cyc?
I randomly selected the link grammar parser
http://www.link.cs.cmu.edu/link/ for the parser, although
On Mon, Nov 05, 2007 at 11:11:41AM -0800, Matt Mahoney wrote:
--- Linas Vepstas [EMAIL PROTECTED] wrote:
I randomly selected the link grammar parser
http://www.link.cs.cmu.edu/link/ for the parser,
It still has a few bugs.
(S (NP I)
   (VP ate pizza
       (PP with
           (NP
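(That's the classic PP-attachment ambiguity; a sketch, completions
mine since the parse above is cut off, of the two readings of
"I ate pizza with X" as nested tuples:)

# instrument reading: the PP modifies the verb
verb_attach = ('S', ('NP', 'I'),
               ('VP', 'ate', ('NP', 'pizza'),
                ('PP', 'with', ('NP', 'a fork'))))

# topping reading: the PP modifies the noun
noun_attach = ('S', ('NP', 'I'),
               ('VP', 'ate',
                ('NP', 'pizza',
                 ('PP', 'with', ('NP', 'mushrooms')))))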
On Mon, Nov 05, 2007 at 03:17:13PM -0600, Linas Vepstas wrote:
On Mon, Nov 05, 2007 at 11:11:41AM -0800, Matt Mahoney wrote:
--- Linas Vepstas [EMAIL PROTECTED] wrote:
I randomly selected the link grammar parser
http://www.link.cs.cmu.edu/link/ for the parser,
It still has a few bugs
On Tue, Nov 06, 2007 at 01:55:43PM -0500, Monika Krishan wrote:
questions was the possibility that AGI might come full circle and attempt to
emulate human intelligence (HI) in the process of continually improving
itself.
Google "The Simulation Argument", Nick Bostrom. There is a 1/3 chance
that
On Wed, Nov 07, 2007 at 08:38:40AM -0700, Derek Zahn wrote:
A large number of individuals on this list are architecting an AGI
solution (or part of one) in their spare time. I think that most of
those efforts do not have meaningful answers to many of the questions,
but rather intend to
On Fri, Nov 09, 2007 at 03:40:19PM +0100, Shane Legg wrote:
[...]
!
I haven't finished reading the thing, but I did notice some typos.
Page 8: defn 1.3.2 has a missing \mu; it should say
... has the additional property \mu(\Omega)...
and next sentence is also missing a \mu:
should say ...then
Robin Hanson wrote:
The fact that people are prone to take these estimate
questions as attitude surveys is all the more reason to seek concrete
arguments, rather than yet more attitudes.
What makes you think that concrete arguments can be found for
prognostication?
Yes, Boeing can
On Sat, Nov 10, 2007 at 06:52:28AM -0700, John G. Rose wrote:
Here is a stimulating read available online about emergent meta-systems and
Holonomics...ties a lot of things together, very rich reading.
http://www.scribd.com/doc/10456/Reflexive-Autopoietic-Dissipative-Speical-Sy
Hi Adam,
Thanks for the reply.
On Fri, Nov 09, 2007 at 09:48:53PM -0800, Adam Pease wrote:
Linas,
My take is that it is a fact that there are different ways of carving
up metaphysics that are not mutually compatible, but which are
individually adequate. It's precisely why SUMO and Cyc
On Sat, Nov 10, 2007 at 12:27:30AM -0500, Benjamin Goertzel wrote:
I'm more bullish on the creation of
knowledge-bases by mining natural language.
Yes, but early automobiles did not start themselves; they
had a hand crank to get them going. I'm looking at the
upper ontologies as a way to get
On Sat, Nov 10, 2007 at 10:19:44AM -0800, Jef Allbright wrote:
as I was driving home I approached a
truck off the side of the road, its driver pulling hard on a bar,
tightening the straps securing the load. Without conscious thought I
moved over in my lane to allow for the possibility that
On Sun, Nov 11, 2007 at 02:16:06PM -0500, Edward W. Porter wrote:
It's way out, but not crazy. If humanity or some mechanical legacy of us
ever comes out the other end of the first century after superhuman
intelligence arrives, it or they will be ready to start playing in the
Galactic big
On Mon, Nov 12, 2007 at 04:56:00PM -0500, Richard Loosemore wrote:
Linas Vepstas wrote:
I can easily imagine that next year's grand challenge, or the one
thereafter, will explicitly require ability to deal with cyclists,
motorcyclists, pedestrians, children and dogs. Exactly how they'd test
On Mon, Nov 12, 2007 at 01:49:52PM -0500, Mark Waser wrote:
What I thought you meant was, if a user asked "I'm a small farmer in New
Zealand. Tell me about horses" then the system would be able to disburse
its relevant knowledge about horses, filtering out the irrelevant stuff.
What
On Mon, Nov 12, 2007 at 06:56:51PM -0500, Mark Waser wrote:
It will happily include irrelevant facts
Which immediately makes it *not* relevant to my point.
Please read my e-mails more carefully before you hop on with ignorant
flames.
I read your emails, and, mixed in with some
On Mon, Nov 12, 2007 at 06:22:37PM -0600, Bryan Bishop wrote:
On Monday 12 November 2007 17:31, Linas Vepstas wrote:
If and when you find a human who is capable of having conversations
about horses with small farmers, rodeo riders, vets, children
and biomechanicians, I'll bet
On Mon, Nov 12, 2007 at 07:46:15PM -0500, Mark Waser wrote:
There is a big difference between being able to fake something for a
brief period of time and being able to do it correctly. All of your
phrasing clearly indicates that *you* believe that your systems can only
fake it for a
On Mon, Nov 12, 2007 at 08:44:58PM -0500, Mark Waser wrote:
So perhaps the AGI question is, what is the difference between
a know-it-all mechano-librarian, and a sentient being?
I wasn't assuming a mechano-librarian. I was assuming a human that could
(and might be trained to) do some
On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote:
Suppose that in some significant part of Novamente there is a
representation system that uses probability or likelihood numbers to
encode the strength of facts, as in [I like cats](p=0.75). The (p=0.75)
is supposed to
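(A bare-bones sketch of that representation, illustration mine; the
revision rule here is a naive average, where systems like NARS or PLN
use more principled formulas:)

from dataclasses import dataclass

@dataclass
class Fact:
    statement: str
    p: float              # strength in [0, 1]

def revise(a: Fact, b: Fact) -> Fact:
    assert a.statement == b.statement
    return Fact(a.statement, (a.p + b.p) / 2)   # naive pooling

belief = revise(Fact('I like cats', 0.75), Fact('I like cats', 0.85))
print(f'[{belief.statement}](p={belief.p})')    # [I like cats](p=0.8)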
On 18/11/2007, Mike Tintner [EMAIL PROTECTED] wrote:
I might be getting confused - or rather, I am quite consciously bearing
that in mind. Let me just say then: I have not heard a *creative* new idea
here that directly addresses and shows the power to solve even in part the
problem of
On 20/11/2007, Benjamin Goertzel [EMAIL PROTECTED] wrote:
How much funding is massive varies from domain to domain. E.g. it's
hard to
do anything in nanotech without really expensive machinery. For AGI, $10M
is a lot of money, because the main cost is staff salaries, plus commodity
Hi,
On 20/11/2007, Dennis Gorelik [EMAIL PROTECTED] wrote:
Benjamin,
That's massive amount of work, but most AGI research and development
can be shared with narrow AI research and development.
There is plenty of overlap between AGI and narrow AI, but not as much as you
suggest...
That's only
On 24/11/2007, Mike Tintner [EMAIL PROTECTED] wrote:
Linas,
I'm not asking for much more than brief exposition of ideas in this
forum, that just begin to show some promise. I'm not demanding or expecting
something fully worked through. The fact remains that I don't think I've
heard any in
On 27/02/2008, a [EMAIL PROTECTED] wrote:
This causes real controversy in this discussion list, which pressures me
to build my own AGI.
How about joining effort with one of the existing AGI projects?
--linas
On 07/03/2008, Matt Mahoney [EMAIL PROTECTED] wrote:
--- Mark Waser [EMAIL PROTECTED] wrote:
Attractor Theory of Friendliness
There exists a describable, reachable, stable attractor in state space
that
is sufficiently Friendly to reduce the risks of AGI to acceptable levels
On 11/03/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
An attractor is a set of states that are repeated given enough time. If
agents are killed and not replaced, you can't return to the current
state.
False. There are certainly attractors that disappear, first
seen by
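(A concrete instance, illustration mine: the logistic map
x -> r*x*(1-x) has a stable fixed-point attractor at x* = 1 - 1/r for
1 < r < 3; push the parameter past that range and the attractor is
gone.)

def iterate(r: float, x: float = 0.5, n: int = 1000) -> float:
    for _ in range(n):
        x = r * x * (1 - x)
    return x

print(iterate(2.5))   # converges to 1 - 1/2.5 = 0.6, the attractor
print(iterate(3.8))   # chaotic: no stable fixed point any more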
On 10/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
Do you think that any of this contradicts what I've written thus far? I
don't immediately see any contradictions.
The discussions seem to entirely ignore the role of socialization
in human and animal friendliness. We are a large collection
On 14/02/2008, Mike Tintner [EMAIL PROTECTED] wrote:
Pei: Though many people assume reasoning can only be applied to
symbolic or linguistic materials, I'm not convinced yet, nor that
there is really a separate imaginative reasoning --- at least I
haven't seen a concrete proposal on
On 13/03/2008, Vladimir Nesov [EMAIL PROTECTED] wrote:
On Thu, Mar 13, 2008 at 8:35 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
A bit of vision processing fun:
http://www.friends.hosted.pl/redrim/Reading_Test.jpg
Interesting: is it possible to construct a similar thing in audio form
On 13/03/2008, Bob Mottram [EMAIL PROTECTED] wrote:
Interesting. I assume that OCR programmers already know about this.
Traditional OCR tries to recognize one letter at a time, together
with guidance from a spell checker. For this example, the spell
checker would barf, so OCR might get all the
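(A toy sketch of that letter-at-a-time scheme, all data mine: ranked
letter hypotheses per position, with the dictionary doing the
disambiguation.)

from itertools import product

DICTIONARY = {'horse', 'house'}
# per-position letter hypotheses from a hypothetical recognizer;
# the third letter is ambiguous
hypotheses = [['h'], ['o'], ['u', 'r'], ['s'], ['e']]

words = (''.join(p) for p in product(*hypotheses))
print([w for w in words if w in DICTIONARY])   # ['house', 'horse']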
On 18/04/2008, Pei Wang [EMAIL PROTECTED] wrote:
I believe AGI is basically a theoretical problem, which will be solved
by a single person or a small group, with little funding.
I'm not sure I believe this. After working on this a bit, it has become
clear to me that there are more ideas than
On 20/04/2008, Derek Zahn [EMAIL PROTECTED] wrote:
William Pearson writes:
Consider an AI learning chess; it is told in plain English that...
I think the points you are striving for (assuming I understand what you
mean) are very important and interesting. Even the first simplest steps
2008/6/30 Terren Suydam [EMAIL PROTECTED]:
savant
I've always theorized that savants can do what they do because
they've been able to get direct access to, and train, a fairly
small number of neurons in their brain, to accomplish highly
specialized (and thus rather unusual) calculations.
I'm
2008/6/27 Stephen Reed [EMAIL PROTECTED]:
Hi Richard,
To re-capitulate, this list is not dead - some of its historical posters are
very busy.
That's the case for me. Hard to spend a lot of time arguing
about thin air, when one is busy actually trying to build something
that works.
--linas
2008/6/22 William Pearson [EMAIL PROTECTED]:
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences have the evidence for being simpler on
their side.
Familiar with Bostrom's simulation
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
Interesting: is it possible to train yourself to run a specially
designed nontrivial inference circuit based on low-base
transformations (e.g. binary)?
Why binary?
I once skimmed a biography of Ramanujan, he started
multiplying numbers in his head
2008/6/30 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 8:31 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
Why binary?
I once skimmed a biography of Ramanujan, he started
multiplying numbers in his head as a pre-teen. I suspect
it was grindingly boring, but given the surroundings
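(For what it's worth, a sketch, illustration mine, of what a low-base
routine buys you: shift-and-add binary multiplication needs only
doubling, halving and addition, no memorized tables.)

def mul_binary(a: int, b: int) -> int:
    acc = 0
    while b:
        if b & 1:        # low bit of b set: add the current a
            acc += a
        a <<= 1          # double
        b >>= 1          # halve
    return acc

assert mul_binary(27, 19) == 27 * 19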
2008/6/16 Abram Demski [EMAIL PROTECTED]:
I previously posted here claiming that the human mind (and therefore
an ideal AGI) entertains uncomputable models, counter to the
AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
I missed your earlier posts. However, I believe
2008/7/1 Vladimir Nesov [EMAIL PROTECTED]:
On Tue, Jul 1, 2008 at 10:02 AM, Linas Vepstas [EMAIL PROTECTED] wrote:
What are you trying to accomplish here? I don't see where
you are trying to go with this.
I don't think a human can consciously train one or two neurons
to do something, we
2008/7/2 Hector Zenil [EMAIL PROTECTED]:
Hypercomputational models basically pretend to take advantage of
either infinite time or infinite space (including models such as
infinite resources, Zeno machines or the Omega-rule, real computation,
etc.), from the continuum. Depending of the
Reposting, sorry if this is a dupe.
--linas
-- Forwarded message --
2008/6/22 William Pearson [EMAIL PROTECTED]:
Well since intelligence explosions haven't happened previously in our
light cone, it can't be a simple physical pattern, so I think
non-exploding intelligences
Ben has been saying that embodied experience is crucial for AGI,
and I've been reflexively nodding my head in agreement. Now that
the integration of virtual bodies with reasoning, knowledge bases,
and NLP processing is not that far off in the future -- a few years at
most -- I'm starting to
2008/9/17 JDLaw [EMAIL PROTECTED]:
IMHO to all,
There is an important morality discussion about how sentient life will
be treated that has not received its proper treatment in your
discussion groups. I have seen glimpses of this topic, but no real
action proposals. How would you feel if
Let's take the opencog list off this email, and move the
conversation to the agi list.
2008/9/17 [EMAIL PROTECTED]:
James,
I agree that the topic is worth careful consideration. Sacrificing the
'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
AGI safety and/or the
2008/9/18 David Hart [EMAIL PROTECTED]:
On Thu, Sep 18, 2008 at 3:26 PM, Linas Vepstas [EMAIL PROTECTED]
wrote:
I agree that the topic is worth careful consideration. Sacrificing the
'free as in freedom' aspect of AGPL-licensed OpenCog for reasons of
AGI safety and/or the prevention
2008/9/29 YKY (Yan King Yin) [EMAIL PROTECTED]:
I'm planning to make the project opensource, but I want to have a web
site that keeps a record of contributors' contributions. So that's
taking some extra time.
Most wikis automatically keep track of who made
what changes, when.
*All* source
2008/9/29 Stephen Reed [EMAIL PROTECTED]:
Ben gave the following examples that demonstrate the ambiguity of the
preposition "with":
People eat food with forks
People eat food with friend[s]
People eat food with ketchup
[...]
how Texai would process Ben's examples. According to
2008/9/29 Ben Goertzel [EMAIL PROTECTED]:
Stephen,
Yes, I think your spreading-activation approach makes sense and has plenty
of potential.
Our approach in OpenCog is actually pretty similar, given that our
importance-updating dynamics can be viewed as a nonstandard sort of
spreading
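(A minimal spreading-activation sketch, illustration mine and
emphatically not the actual Texai or OpenCog code: activation seeded
at the sentence's words diffuses over a tiny semantic graph, and the
sense of "with" left most active wins.)

GRAPH = {
    'fork':    ['instrument'],
    'ketchup': ['ingredient'],
    'friend':  ['companion'],
    'eat':     ['instrument', 'ingredient', 'companion'],
}

def spread(seeds, steps=2, decay=0.5):
    act = {w: 1.0 for w in seeds}
    for _ in range(steps):
        nxt = dict(act)
        for node, a in act.items():
            for nbr in GRAPH.get(node, []):
                nxt[nbr] = nxt.get(nbr, 0.0) + decay * a
        act = nxt
    return act

act = spread(['eat', 'fork'])
senses = ('instrument', 'ingredient', 'companion')
print(max(senses, key=lambda s: act.get(s, 0.0)))   # instrument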
FYI,
I've long grumbled about AGI being used to assess
political, social and moral issues. So I found the announcement
below interesting.
--linas
-- Forwarded message --
From: Bei Yu [EMAIL PROTECTED]
Date: 2008/12/10
Subject: [Corpora-List] postdoc position at Northwestern
2009/1/10 Nathan Cook nathan.c...@gmail.com:
What about vibration? We have specialized mechanoreceptors to detect
vibration (actually vibration and pressure - presumably there's processing
to separate the two). It's vibration that lets us feel fine texture, via the
stick-slip friction between
I saw the following post from Antonio Alberti, on the linked-in
discussion group:
ALife and AGI
Dear group participants.
The relation among AGI and ALife greatly interests me. However, too few recent
works try to relate them. For example, many papers presented in AGI-09
On 17 October 2010 18:20, Ben Goertzel b...@goertzel.org wrote:
In other words, using formal grammar actually makes it harder to establish
the connection at the NL-logic interface. I.e., it is harder to translate NL
sentences to formal grammar than to formal logic.
KY
Quite the opposite,