Below is data for projecting Moore's law, adapted from Hans Moravec's data
published at:
http://www.transhumanist.com/volume1/moravec.htm
I began the analysis in 1985 and deleted computers not based upon
single commodity CPU chips, in order to focus the forecast.
Without further debate, assume
In a previous post Eliezer referenced a good critique of Moore's Law:
http://firstmonday.org/issues/issue7_11/tuomi/index.html
Accepting the facts presented in that paper, I agree with its
conclusion that Moore's Law was never a valid law. But I have researched
Moore's Law references on the Web
On Tue, 14 Jan 2003, Pei Wang wrote:
I'm working on a paper to compare predicate logic and term logic. One
argument I want to make is that it is hard to infer on uncountable nouns in
predicate logic, such as to derive "Rain-drop is a kind of liquid" from
"Water is a kind of liquid" and
On Wed, 12 Feb 2003, Brad Wyble wrote:
I can't imagine the military would be interested in AGI, by its very
definition. The military would want specialized AI's, constructed
around a specific purpose and under their strict control. An AGI goes
against everything the military wants from its
Daniel,
For a start look at the IPTO web page and links from:
http://www.darpa.mil/ipto/research/index.html
Darpa has a variety of Offices which sponsor AI related work, but IPTO is
now being run by Ron Brachman, the former president of the AAAI. When I
listened to the talk he gave at Cycorp in
On Mon, 17 Feb 2003, Mike Deering wrote:
Based on the available data, how are we to extrapolate the doubling time into
the future? On 1/6/2003 Stephen Reed wrote: Progressing from -50 db HEC to 0 db
HEC in 22 years is equivalent to Moore's Law doubling every 16 months. [ 2^16.61
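The bracketed arithmetic can be reconstructed: a 50 dB gain is five factors of ten, i.e. 5 × log2(10) ≈ 16.61 doublings, and 22 years spread over those doublings is about 16 months each. A quick sanity check of the quoted figures (a sketch, not Reed's original spreadsheet):

```python
import math

def doubling_time_months(db_gain, years):
    """Doubling time implied by a decibel gain over a span of years.

    Every 10 dB is one factor of ten, so db_gain/10 decades;
    each decade is log2(10) ~ 3.32 doublings.
    """
    doublings = (db_gain / 10) * math.log2(10)
    return years * 12 / doublings

# -50 dB HEC to 0 dB HEC in 22 years:
months = doubling_time_months(50, 22)   # ~15.9, i.e. roughly 16 months
```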
On Mon, 17 Feb 2003, Mike Deering wrote:
It is obvious that no one on this list agrees with me. This does not mean that I am
obviously wrong. The division is very simple.
My position: the doubling time has been reducing and will continue to do so.
Ray Kurzweil agrees with you and has data
On Mon, 17 Feb 2003, Brad Wyble wrote:
Also, integrating the power of multiple units is another hard problem. I don't
recall the figure, but the vast majority of the brain is interconnective tissue.
Networking hardware scales nonlinearly with the number of processing units. Even if
you
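Brad's point about interconnect can be made concrete: a full crossbar among n processing units needs n(n-1)/2 point-to-point links, which is one sense in which networking hardware scales nonlinearly (a worst-case sketch; real interconnects use cheaper topologies such as meshes or trees):

```python
def crossbar_links(n):
    """Point-to-point links in a full crossbar among n units: n choose 2."""
    return n * (n - 1) // 2

# 8 units need 28 links; 1000 units need ~half a million.
small = crossbar_links(8)
large = crossbar_links(1000)
```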
I would like to contribute new SPEC CINT 2000 results as they are posted
to the SPEC benchmark list by semiconductor manufacturers. I expect
to post perhaps 10 times per year with this news. This is the source data
for my Human Equivalent Computing spreadsheet and regression line.
If Kurzweil
Regarding the Google voice search technology...
Maybe there is no IP-to-phone-number mapping whatsoever, as the web page says
that the system has very limited capacity. So the search results page could be
shared among all users.
I imagine that the Google service is aimed at cell phone users and that
the (to
On Wed, 26 Feb 2003, Ben Goertzel wrote:
Cyc seems moderately strong on declarative knowledge (though I think it
misses most of the fine-grained declarative knowledge that helps us cope
with the real world... it focuses on relatively abstract levels of
declarative knowledge...)
Agreed on the
AMD just posted the SPEC CPU2000 benchmark scores for the Athlon MP 2600+
chip. This is not their fastest chip.
CINT2000 Peak: 781
CFP2000 Peak: 650
Peak means that the compiler is free to use maximum optimization settings
for the benchmark. Using Moravec/Kurzweil assumptions, 1 CINT2000 =
www.specbench.org has received this benchmark from AMD:
AMD Athlon XP 3000+ CINT2000 Peak 995
which equates to 4432 MIPS (Moravec/Kurzweil). Very slightly below my 20
year log regression line.
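The quoted pair (CINT2000 Peak 995 ≈ 4432 MIPS) implies a conversion factor near 4.45 MIPS per CINT2000 point. The exact Moravec/Kurzweil constant was truncated from the earlier post, so the factor below is back-derived from these two numbers (an assumption, not a quoted figure):

```python
# Back-derived from the AMD Athlon XP 3000+ datum: 995 CINT2000 ~ 4432 MIPS.
MIPS_PER_CINT2000 = 4432 / 995          # ~4.45 MIPS per CINT2000 point

def cint2000_to_mips(cint2000_peak):
    """Convert a SPEC CINT2000 peak score to Moravec/Kurzweil-style MIPS."""
    return cint2000_peak * MIPS_PER_CINT2000

# Applied to the Athlon MP 2600+ score of 781 quoted above:
athlon_mp_2600_mips = cint2000_to_mips(781)   # ~3479 MIPS
```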
-Steve
Stephen L. Reed
The Cyc Knowledge Base was one of the three technologies that participated
in Halo Phase I. We extended our deductive inference engine to cover the
types of questions found in Advanced Placement (High School) chemistry
examinations, and formatted the (NL Generated) justifications for the
I participated in a AAAI-02 workshop and I would be interested in
participating in one for AAAI-04. The bar for research paper submission
is much lower - basically the organizers review the submissions.
-Steve
On Mon, 15 Sep 2003, Pei Wang wrote:
I wonder if we have enough people
The published hardware description of the Cell SPUs (128-bit vector
engines, 128 registers each) matches the published Freescale AltiVec
processor architecture. I've looked over the programmer's documentation
for that processor and believe that vector processing is of limited
usefulness for
On Wed, 9 Feb 2005, Eugen Leitl wrote:
What I don't like about Cell is lack of 8 bit and 16 bit integer data types
in SPU SIMD. I'm also missing discussion on whether the SPUs are connected by
a crossbar (there might be no need for it, if the internal bus is really fast
and wide), and which
On Wed, 9 Feb 2005, Stephen Reed wrote:
On Wed, 9 Feb 2005, Eugen Leitl wrote:
What I don't like about Cell is lack of 8 bit and 16 bit integer data types
in SPU SIMD. I'm also missing discussion on whether the SPUs are connected
by
a crossbar (there might be no need
On Mon, 14 Mar 2005, Dennis Gorelik wrote:
From my point of view CYC is on the same level of intelligence as
MS Word. Well, probably MS Word is even more intelligent.
At least MS Word works and produces nice and intelligent results (not
super-intelligent though).
Does CYC have any practical
Cycorp is receiving mostly steady funding from government research
agencies, as contracts come and go. Staff members recently attended a
conference on acquiring knowledge from volunteer contributors. Regarding
your point, the Cycorp approach is that the bootstrapping occurs at the
edges of
--- Russell Wallace [EMAIL PROTECTED] wrote:
A serious AGI will have to end up making Google look like those '10 PRINT
HELLO: GOTO 10' programs we used to write on our childhood 8-bit computers.
Agreed.
If everyone just downloads their own copy and tweaks it separately from everyone
--- Russell Wallace [EMAIL PROTECTED] wrote:
On 8/28/06, Bill Hibbard [EMAIL PROTECTED] wrote:
By open source distribution you are expressing optimism about human nature,
and your developer community will mostly justify that optimism. The best
approach for the few who disappoint
--- Charles D Hixson [EMAIL PROTECTED] wrote:
Stephen Reed wrote:
I would appreciate comments regarding additional constraints, if any, that
should be applied to a traditional open source license to achieve a free but
safe widespread distribution of software that may lead to AGI
--- Philip Goetz [EMAIL PROTECTED] wrote:
Wilbur Peng and I developed a set of standards for
AGI components, called MAGIC, that was intended to
form the foundation of an open-source AGI effort.
[snip]
- the programs will be written in a modular fashion
- the programs will be agent-based,
Thanks YKY for your response!
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
I support opensource AGI with the following reasons:
1. It would be nearly impossible to enforce the single-AGI scenario; I
think the best strategy is to start a project and try our best in it.
Regardless, I
--- Charles D Hixson [EMAIL PROTECTED] wrote:
Stephen Reed wrote:
...
Rather than cash payments I have in mind a scheme
similar to the pre-world wide web bulletin board
system in which FTP sites had upload and download
ratios. If you wished to benefit from the site by
downloading
--- Philip Goetz [EMAIL PROTECTED] wrote:
On 9/1/06, Stephen Reed [EMAIL PROTECTED] wrote:
Rather than cash payments I have in mind a scheme
similar to the pre-world wide web bulletin board
system in which FTP sites had upload and download
ratios. If you wished to benefit from
I would add that previous more-or-less general AI
projects have not greatly exceeded their modest
expectations. So given this experience perhaps there
is a tendency among potential sponsors to classify new
AGI projects as crackpot schemes.
-Steve
--- Pei Wang [EMAIL PROTECTED] wrote:
Good
Hi Andi,
I too have thought about using the human/computer
interface as sensors and actuators for an AI. This
would accomplish a few things: 1. Grounding of
symbols in a real world that is mostly symbolic and
precise to begin with. 2. Behaviors and actions could
be developed that are useful
This is a better link from the company that I found by
Googling nanosyntax:
http://nanosyntax.com/
The basic idea is that word senses are not atomic but
are composed of something more primitive whose
sentence-distributed structure is called nanosyntax.
As I am about to write a parser using a
I worked at Cycorp when the FACTory game was developed. The examples below do
not reveal Cyc's knowledge of the assertions connecting these disparate
concepts; rather, most show that the argument constraints of the terms compared
are overly general. The exception is the example Most
I've been using OpenCyc as the standard ontology for my texai project. OpenCyc
contains only the very few rules needed to enable the OpenCyc deductive
inference engine to operate on its OpenCyc content. On the other hand
ResearchCyc, whose licenses are available without fees for research
Given my experience while employed at Cycorp, I would say that there are two
ways to work with them. The first way is to collaborate with Cycorp on a
sponsored project. Collaborators are mainly universities (e.g. CMU, Stanford)
and established research companies (e.g. SRI, SAIC) who have a
, but wasn't the original Mindpixel based
fundamentally upon probabilistic representations (coherence values) whereas
Cyc, from what I understand, doesn't represent facts or rules
probabilistically.
- Bob
On 23/01/07, Stephen Reed [EMAIL PROTECTED] wrote: Given my experience while
employed
When I was at Cycorp, we used Allegro for program development, and a Lisp-to-C
translator and runtime of Cycorp's design for production. When holding
millions of knowledge-store objects, Allegro is less space-efficient than the
Cycorp C runtime. For example, Allegro uses two fullwords to
form? Or can you post some good screens of it?
James Ratcliff
Stephen Reed [EMAIL PROTECTED] wrote:
For my own AI research I am using Java. Apart from its satisfactory speed, I
like the NetBeans IDE, and most importantly like all the third-party software
libraries that I can plug in. Because my
/07, Stephen Reed [EMAIL PROTECTED] wrote:
Hi James,
My development source code is stored in the subversion repository at
SourceForge: http://sf.net/projects/texai . There is no GUI presently
because I am concentrating on the server-side functions and want text chat
as the system's primary
I've published a roughly categorized link list of Java AI tools and libraries,
that may be helpful to Java developers here:
http://texai.org/blog/software-links
Are there useful Java components that are missing?
Thanks!
-Steve
Ed,
I've experimented with various Java parallel processing frameworks, most
recently the fork-join framework (JSR166) that will be included in Java 7. The
Sphinx-4 automatic speech recognition that I use employs a multi-threaded
phoneme scorer in order to achieve better performance. For the
Hi YKY,
I hope that by this time next year the Texai project will have a robust English
parser suitable for your project. I am working in collaboration with the Air
Force Research Laboratory's Synthetic Teammate Project
- Original Message
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, December 20, 2007 9:41:10 PM
Subject: Re: [agi] NL interface
On 12/21/07, Stephen Reed [EMAIL PROTECTED] wrote:
Hi YKY,
I hope that by this time next year the Texai project
Thanks Bruno,
I will include a link for the OpenAir Java implementation in my link list at:
http://texai.org/blog/software-links/
-Steve
Stephen L. Reed
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
-
Pei,
You can host your open source project at SourceForge immediately without
building a project team. While at Cycorp, I created and ran the OpenCyc
SourceForge project using only Cycorp contributors. It takes very little
effort to create a SourceForge project, especially because you can
On the SourceForge project site, I just released the Java library for
Incremental Fluid Construction Grammar.
Fluid Construction Grammar is a natural language parsing and generation system
developed by researchers at emergent-languages.org. The system features a
production rule mechanism for
Ben asked:
What is the semantics of
?on-situation-localized-14 rdf:type texai:On-SituationLocalized
On-SituationLocalized is a term I created for this use case, while postponing
its associated definitional assertions. What I have in mind is that
On-SituationLocalized is a specialization
- Original Message
From: Benjamin Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Wednesday, January 9, 2008 4:04:58 PM
Subject: Re: [agi] Incremental Fluid Construction Grammar released
And how would a young child or foreigner interpret on the Washington
Monument or
A typo in my previous post:
...
Therefore, from the viewpoint of CxG, your example variations of the
on construction have their own associated semantics, and are
*NOT* necessarily covered by the rules that I developed for my sense of
on.
...
-Steve
released
On Jan 10, 2008 9:59 AM, Stephen Reed wrote:
and that the system is to learn constructions for your examples. The below
dialog is Controlled English, in which the system understands and generates
constrained syntax and vocabulary.
[user] The elements of a shit-list can
Will,
Affixes are morphological constructions and my system could have rules to
handle them. I plan eventually to include such rules for combinations that are
new. However the Texai lexicon will explicitly represent all common word forms
and multi-word phrases that would otherwise be covered
Granted that from a logical viewpoint, using a controlled English syntax to
acquire rules is as much work as explicitly encoding the rules. However, a
suitable, engaging, bootstrap dialog system may permit a multitude of
non-expert users to add the rules, thus dramatically reducing the amount
PM
Subject: Re: [agi] Incremental Fluid Construction Grammar released
Do you plan to pay these non-experts, or recruit them as volunteers?
ben
On Jan 10, 2008 1:11 PM, Stephen Reed [EMAIL PROTECTED] wrote:
Granted that from a logical viewpoint, using a controlled English syntax to
acquire
10:57 AM, Stephen Reed [EMAIL PROTECTED] wrote:
If I understand your question correctly it asks whether a non-expert
user can be guided to use Controlled English in a dialog system. In
This is an idea that I wanted to try at Cycorp but Doug Lenat
said that it had been tried before and failed
I am very interested in parsing the constructions used in WordNet and
Wiktionary glosses (i.e. definitions). Here are some samples from WordNet
online http://wordnet.princeton.edu/perl/webwn . The glosses are
parenthesized, and examples are in italics for those of you with rich text
email
Matt,
I agree with Ben. Tomasello's book Constructing a Language: A Usage-Based
Theory of Language Acquisition argues that young children develop the skill to
discern the intentional actions of others. Construction Grammar (CxG) is a
simple pairing of form and meaning. According to this
Hi Pei,
I looked at the source code a bit. Do you think that a NARS statement can be
represented as an RDF triple? Furthermore can a NARS term have a URI to be RDF
compatible? If so, then that would facilitate statement and term exchange
between our systems.
Nice photo of you on the mountain!
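One possible encoding of the idea, sketched under the assumption of a made-up `nars#` namespace (neither NARS nor Cyc defines these URIs), is to keep the bare statement as a triple and attach the NARS truth value (frequency, confidence) by RDF-style reification:

```python
# Hypothetical namespace for illustration only.
NARS_NS = "http://example.org/nars#"

def nars_to_rdf(subject, copula, obj, frequency, confidence):
    """Encode a NARS statement as RDF triples.

    The statement itself becomes one triple; its truth value is
    attached to a reified statement node, since plain RDF triples
    cannot carry (frequency, confidence) directly.
    """
    stmt = f"{NARS_NS}stmt-{subject}-{obj}"
    return [
        (f"{NARS_NS}{subject}", f"{NARS_NS}{copula}", f"{NARS_NS}{obj}"),
        (stmt, "rdf:subject", f"{NARS_NS}{subject}"),
        (stmt, "rdf:predicate", f"{NARS_NS}{copula}"),
        (stmt, "rdf:object", f"{NARS_NS}{obj}"),
        (stmt, f"{NARS_NS}frequency", str(frequency)),
        (stmt, f"{NARS_NS}confidence", str(confidence)),
    ]

# "bird --> animal <0.9, 0.8>": bird is a kind of animal.
triples = nars_to_rdf("bird", "inheritance", "animal", 0.9, 0.8)
```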
and http://nars.wang.googlepages.com/NARS-Examples-MultiSteps.txt .
I look forward to meeting you in Memphis.
Regards,
Pei
On Jan 18, 2008 3:21 AM, Stephen Reed [EMAIL PROTECTED] wrote:
Hi Pei,
I looked at the source code a bit. Do you think that a NARS statement can be
represented as an RDF triple
I've posted a brief design document for the Texai bootstrap dialog system on my
blog.
http://texai.org/blog/2008/01/20/bootstrap-dialog-system-design
-Steve
The article on the fate of the two AI researchers was interesting. Perhaps
many here share their belief that AGI will vastly change the world. It is
however unfortunate that they did not seek medical help for their symptoms of
depression - no one needs to suffer that kind of pain. They were
Hi James,
Your web site is informative. I very much seek comments, input and
collaboration with the AI Lab at the University of Texas. I see that your
interest is knowledge based systems. I worked indirectly with Dr. Porter
during my tenure at Cycorp as its first manager for the DARPA Rapid
Mike,
This is an interesting comment. As perhaps you know, in my own work I am using
an Albus Hierarchical Control System, in which the higher levels maintain a
world model. Rodney Brooks argued some years ago that such control hierarchies
did not need to model abstractly, that the world
I have been collaborating with this lab on their Fluid Construction Grammar
system, as described briefly in this blog post:
http://texai.org/blog/2007/10/24/fluid-construction-grammar
I downloaded their Common Lisp implementation and rewrote it in Java and
demonstrated that I could achieve the
Hi Evgenii,
From my bookshelf:
1. Code Generation in Action (2003) - Jack Herrington
2. Computer Program Construction (1994) - Ali Mili, Jules Desharnais, Fatma
Mili
3. Knowledge Based Program Construction (1979) - David R. Barstow
4. Studies in Automatic Programming Logic (1977) - Zohar
on automatic programming?
Stephen Reed wrote:
Eli,
Same as Ben - Generative Programming: Methods, Tools, and Applications
(2000) - Krzysztof Czarnecki, Ulrich W. Eisenecker
I would chime in and say that this one also struck me as a very
stimulating book
Richard,
I entirely agree with your comments. I would like to eventually stop
programming in Java and have the system do that for me. I am strongly
motivated to build its dialog component first because that addresses the issue
of how to collaborate with the system when the rough seas are
on automatic programming?
Stephen Reed wrote:
Richard,
I entirely agree with your comments. I would like to eventually stop
programming in Java and have the system do that for me. I am strongly
motivated to build its dialog component first because
Mike,
Cyc uses, and my own Texai project will also eventually employ, deductive
reasoning (i.e. modus ponens) as its main inference mechanism. In Cyc, most of
the fallacies that Shirky points out are avoided by two means - nonmonotonic
(e.g. default) reasoning, and context.
Although I
Pei,
Given your description, I agree B2 is the way to go. At Cycorp, the inductive
(e.g. rule induction), abductive (e.g. hypothesis generation), and analogical
reasoning engines I observed were all supported by deductive inference. I was
also a member of a Cycorp team that collaborated with
David said:
Most of the people on this list have quite different ideas about how an AGI
should be made, BUT I think there are a few things that most, if not all,
agree on.
1. Intelligence can be created by using computers that exist today using
software.
Yes, I would be very glad to incorporate any content that I can then republish
using a Wikipedia-compatible license, e.g. GNU Free Documentation License. Any
weaker license, such as Apache or BSD, would be OK too.
-Steve
Briefly, I think that Cyc indeed has solved the brittleness problem observed
with 1980's style narrow-domain expert systems. During the Halo project, Cyc
was merely extended in a principled fashion to answer a battery of word
questions in the chemistry domain. In my opinion the chief drawback
Pei: Resolution-based FOL on a huge KB is intractable.
Agreed.
However Cycorp spent a great deal of programming effort (i.e. many man-years)
finding deep inference paths for common queries. The strategies were:
prune the rule set according to the context; substitute procedural code for
, and the
result is equivalent to the original knowledge in truth-value only.
It is hard to control the direction of the inference without semantic
information.
Pei
On Feb 18, 2008 11:13 AM, Stephen Reed [EMAIL PROTECTED] wrote:
Pei: Resolution-based FOL on a huge KB
According to the in-house Cycorp jargon, deep inference begins at approximately
four backchain steps in a deductive inference. As most here know, there is an
exponential fanout in the number of separate inference paths with each
backchain step, given a large candidate rule set and a large set
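That fanout is easy to quantify: if every subgoal can be matched by b candidate rules, d backchain steps yield up to b^d distinct inference paths. A toy illustration (the branching factor 10 is an arbitrary example, not a Cyc figure):

```python
def inference_paths(branching_factor, backchain_steps):
    """Worst-case count of distinct inference paths when every subgoal
    can be matched by `branching_factor` candidate rules."""
    return branching_factor ** backchain_steps

# Even a modest 10 candidate rules per subgoal gives 10,000 worst-case
# paths at the four-step "deep inference" threshold, and 100 million
# paths at eight steps.
deep = inference_paths(10, 4)
deeper = inference_paths(10, 8)
```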
Hi Mark,
I value your ideas about 'Friendliness as an attractor in state space'. Please
keep it up.
-Steve
- Original Message
From:
I agree with Mark.
The reason the readers of this forum should seek to control AGI development is
to ensure friendly behavior, rather than leaving this responsibility to an Evil
Company or to some military organization.
With human labor removed as a constraint on our system's economic
While programming my bootstrap English dialog system, I needed a spreading
activation library for the purpose of enriching the discourse context with
conceptually related terms. For example given that there is a human-habitable
room that both speakers know of, then it is reasonable to assume
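A minimal spreading-activation sketch of the mechanism (the toy graph, decay, and threshold values here are illustrative; this is not the library I used):

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_steps=2):
    """Spread activation from seed concepts over a concept graph.

    graph: dict mapping a concept to its conceptually related concepts.
    seeds: dict of initially activated concepts and their activation levels.
    Returns every concept whose accumulated activation meets the threshold.
    """
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(max_steps):
        next_frontier = {}
        for concept, level in frontier.items():
            passed = level * decay          # activation attenuates per hop
            if passed < threshold:
                continue
            for neighbor in graph.get(concept, ()):
                activation[neighbor] += passed
                next_frontier[neighbor] = max(next_frontier.get(neighbor, 0.0),
                                              passed)
        frontier = next_frontier
    return {c: round(a, 3) for c, a in activation.items() if a >= threshold}

# Both speakers know of a habitable room, so the discourse context is
# enriched with related terms: "door", "wall", and at one more hop "hinge".
graph = {
    "room": ["door", "wall", "ceiling"],
    "door": ["hinge", "doorknob"],
}
context = spread_activation(graph, {"room": 1.0})
```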
Ben,
Wikipedia has significant overlap with the topic list on the AGIRI Wiki. I
propose for discussion the notion that the AGIRI Wiki be content-compatible
with Wikipedia along two dimensions:
license - authors agree to the GNU Free Documentation License
editorial standards - Wikipedia says
Ben,
I just created an account on the wiki and created my user page derived from my
Wikipedia user page. Image uploads on the wiki work the same way as on
Wikipedia - Yay.
-Steve
Thanks Ben for leaving a placeholder for Fluid Construction Grammar. I've
copied over the Wikipedia article for which I wrote most of the content.
Cheers.
-Steve
I've added some content in the Computational Linguistics section of the AGIRI
Wiki, which Ben outlined:
Fluid_Construction_Grammar adapted from the Wikipedia article that I mostly
authored. Link Grammar adapted from Wikipedia. Language Generation adapted
from Wikipedia. Word Grammar
Mike,
An interesting paper on the meanings of words is "I don't believe in word
senses" by Adam Kilgarriff. He concludes:
Following a description of the conflict between WSD [Word Sense Disambiguation]
and lexicological research, I examined the concept, ‘word sense’. It was not
found to be
Ben,
I would agree with an even stronger version of your statement: Treating word
senses as fuzzy, cluster type categories in the context of usage-instances is
the only cognitively plausible method for AGI to comprehend and produce them.
-Steve
- Original Message
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27, 2008 5:30:12 PM
Subject: Re: [agi] Microsoft Launches Singularity
Steve,
Some odd thoughts in reply. Thanks BTW for
article.
1. You don't seem to get what's
Mike,
I have Lakoff & Johnson's Metaphors We Live By, and I'll order the other
titles you recommend.
-Steve
- Original Message
From: Mike
Derek: How could a symbolic engine ever reason about the real world *with*
access to such information?
I hope my work eventually demonstrates a solution to your satisfaction. In the
meantime there is evidence from robotics, specifically driverless cars, that
real world sensor input can be
Hi Jim,
According to the Wikipedia article on SAT solvers, there are extensions for
quantified formulas and first-order logic. Otherwise, SAT solvers operate
principally on sets of symbolic propositions. Agreed?
I believe that SAT solvers are not cognitively plausible. More precisely, I
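As a concrete picture of "operating on sets of symbolic propositions", here is a minimal DPLL-style solver sketch (clauses as lists of signed integers in the usual DIMACS-like convention); production SAT solvers add clause learning, watched literals, and branching heuristics:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL satisfiability check.

    clauses: list of clauses, each a list of int literals
    (positive = variable true, negative = negated).
    Returns a satisfying {variable: bool} assignment, or None if unsat.
    """
    if assignment is None:
        assignment = {}
    # Simplify: drop satisfied clauses, remove falsified literals.
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue                      # clause already satisfied
        remaining = [l for l in clause if abs(l) not in assignment]
        if not remaining:
            return None                   # clause falsified: backtrack
        simplified.append(remaining)
    if not simplified:
        return assignment                 # all clauses satisfied
    # Unit propagation: a one-literal clause forces its variable.
    for clause in simplified:
        if len(clause) == 1:
            l = clause[0]
            return dpll(simplified, {**assignment, abs(l): l > 0})
    # Branch on the first unassigned variable.
    v = abs(simplified[0][0])
    for value in (True, False):
        result = dpll(simplified, {**assignment, v: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x2) and (not x2 or x3) is satisfiable:
model = dpll([[1, 2], [-1, 2], [-2, 3]])
```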
I've posted the status of the Texai bootstrap English dialog system on my blog.
Summary: parsing works for a single use case sentence, and I'm moving on to
generation.
I also created a page describing my approach to English utterance comprehension
here, that integrates the incremental version
of
natural language? (And NL-semantics' impact on logical semantics, as
opposed to letting the computer build the representation for itself,
out of some elementary thought mechanics.)
P.S. Thanks to Pei Wang for the interesting curriculum and to Stephen
Reed for the great work on Texai
Lukasz, I am very pleased with my implementation of the few Double R Grammar
rules required to incrementally parse "the book is on the table", which is an
example sentence from Jerry Ball's paper. Dr. Ball is a proponent of
cognitively plausible NLP architectures.
-Steve
MW/MT:... how do you test acquired knowledge?
I have given this problem some thought, regarding the testing of acquired
grammar facts, rules and skills. Here are some points, mostly from my
experience with Cyc.
Before the knowledge is acquired, the mentor (or ultimately the system
Everyone knows that perception is the result of a combination of pickup
(bottom-up processing) and expectation (top-down processing). There are many,
many ways to implement this idea.
Richard,
Thanks for describing perception, in the same fashion that I believe is
explained by James Albus
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, April 10, 2008 11:20:13 AM
Subject: Re: [agi] How Bodies of Knowledge Grow
... I agree that Albus is interesting. I am superficially familiar
with his approach.
From my point of view I
- Original Message
From: Steve Richfield [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, April 10, 2008 2:58:09 PM
Subject: [agi] Comments from a lurker...
[snip] BTW, the principles behind Dr. Eliza are rather unique. I'd be glad to
send some papers to anyone who is
Hi Richard,
After reading your blog post I wonder if you think either that (1) a
hierarchical control system, such as proposed by James Albus and adopted by me
as the Texai cognitive architecture, is doomed to failure as an AGI due to
complexity, or whether that (2) a hierarchical control
- Original Message
From: Stephen Reed [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, April 11, 2008 2:43:26 PM
Subject: Re: [agi] Blog essay on the complex systems problem
Hi Richard,
After reading your blog post I wonder if you think either that (1) a
hierarchical
- Original Message
From: Richard Loosemore [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, April 11, 2008 3:06:21 PM
Subject: Re: [agi] Blog essay on the complex systems problem
Richard Loosemore wrote:
[snip]
I would not say that your hierarchical control structure is doomed to
Richard Loosemore wrote:
[snip] but this is not an open source project at this stage.
Richard,
I am sad that your AGI project is not open source at this stage. Please
consider developing *something* open source that is related to your mission, at
least analogous to OpenCog -- Novamente.
Mark wrote:
I wonder if you could clarify why you insist upon GPL as opposed to a
Berkeley-type or Apache-type license? I believe very strongly that both
ends of the open source to commercial *spectrum* are entirely unreasonable
and that there is a reasonable middle ground where we
:03 AM, Stephen Reed [EMAIL PROTECTED] wrote:
I would be interested
in your comments on my adoption of Fluid Construction Grammar as a solution
to the NL to semantics mapping problem.
(1) Word Grammar (WG) is a construction-free version of your approach.
It is based solely on spreading
Publishing computer-generated books on demand, aggregating many small profits,
is an interesting illustration of The Long Tail.
Considering an AGI, I anticipate that knowledge and skill acquisition will be
facilitated by this principle. Obscure knowledge and skills can be acquired
from, and