From: Brad Wyble [EMAIL PROTECTED]
Phillip wrote:
The significant acceleration of the mentation rate is only possible with
the introduction of 'Lamarckian' upgrading of the mentation systems, e.g.
the introduction of AGI technology either as new AGI entities or as
augmentation of the human
From: Brad Wyble [EMAIL PROTECTED]
1) AI is a tool and we're the user, or
2) AI is our successor and we retire, or
3) The Friendliness scenario, if it's really feasible.
This collapse of a huge spectrum of possibilities into three
human-society-based categories isn't all that convincing
From: Pei Wang [EMAIL PROTECTED]
[...] On the other hand, the part of NARS that is inconsistent
with PT (such as the induction rule and the abduction rule)
looks simply wrong, and it conflicts with the results of
experiments designed according to PT.
I took a brief look at your NARS site, but I
Pei Wang wrote:
It has been proven that any dynamical neural net that does
pattern recognition is equivalent to a Bayesian classifier;
Can you give me the reference? Thanks!
I'm reading the book Richard M. Golden (1996) Mathematical
Methods for Neural Network Analysis and Design. Basically:
(1) A
From: Ben Goertzel [EMAIL PROTECTED]
Well, this appears to be the order we're going to do for the Novamente
project -- in spite of my feeling that this isn't ideal -- simply due to the
way the project is developing via commercial applications of the
half-completed system. And, it seems likely
From: Ben Goertzel [EMAIL PROTECTED]
So far our work in this area has been more in the vein of narrow AI using a
half-completed wannabe-AGI system, but I'm curious to see how the molecular
biology software applications make use of the AGI capabilities of Novamente
when/if they finally become
From: Philip Sutton
Does anyone have an up-to-date fix on how much computation occurs
(if any) within-cells (as opposed to the traditional neural net
level) that
are part of biological brain systems? Especially in the case of
animals that have a premium placed on the number of
From: Eliezer S. Yudkowsky [EMAIL PROTECTED]
Abstract:
Neurons carry out the many operations that extract meaningful
information from sensory receptor arrays at the organism's periphery
and translate these into action, imagery and memory. Within today's
dominant computational paradigm, these
From: Ben Goertzel [EMAIL PROTECTED]
YKY wrote:
I agree that uploading is not easy. Notice that your idea
of recursive self-improvement being able to work wonders
may also be very much hyped =) Intuitively I guess the
rate of RSI might be roughly inversely proportional to
the complexity of
From: Ben Goertzel [EMAIL PROTECTED]
I agree with you that there will be no limits to the
above 2 processes. What I'm skeptical about is how we
can exploit this possibility. I cannot imagine how an
AI can impose (can't think of better word) a morality
on all human beings on earth, even given
From: Brad Wyble [EMAIL PROTECTED]
The jury is very much out, Phillip. Eliezer goes too far in saying it's a
myth perpetuated by computer scientists. They use the simplest
representations they know to exist in their models for purposes of
parsimony. It's hard to fault them for being rigorous
From: Brad Wyble [EMAIL PROTECTED]
Nonlinear dendritic integration can be accurately captured by the
compartmental model, which divides dendrites into small sections
with ion channels and other internal reaction mechanisms. This
is the most accurate level of modeling. It may be possible to
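A minimal sketch of the compartmental idea: a toy passive two-compartment model with made-up parameters (not a claim about any particular simulator or real neuron), showing how the axial coupling term makes integration across the dendrite nonuniform:

```python
# Toy passive two-compartment dendrite model, integrated by forward Euler.
# All parameters are illustrative placeholders, not fit to real neurons.
def step(v, g_leak=0.1, g_axial=0.05, e_leak=-65.0, c=1.0,
         i_ext=(1.0, 0.0), dt=0.1):
    """Advance both compartment voltages by one time step dt."""
    v0, v1 = v
    dv0 = (-(v0 - e_leak) * g_leak + g_axial * (v1 - v0) + i_ext[0]) / c
    dv1 = (-(v1 - e_leak) * g_leak + g_axial * (v0 - v1) + i_ext[1]) / c
    return (v0 + dt * dv0, v1 + dt * dv1)

v = (-65.0, -65.0)  # both compartments start at the leak reversal (rest)
for _ in range(1000):
    v = step(v)
# Current injected into compartment 0 depolarizes it most strongly;
# a fraction spreads through the axial conductance into compartment 1.
```

A full compartmental model adds many compartments and nonlinear ion-channel currents per compartment; the coupling term above is the part that produces nonuniform integration along the dendrite.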
Ben Goertzel wrote:
But the different trials need not be independent --- we can save the
trajectory of each AI's development continuously, and then restart a new
branch of AI x at time y for any recorded AI x at any recorded time point
y.
Also, we can intentionally form composite AI's by
My thoughts on the idea of an open AGI project:
1. I think a testbed for AGI already exists, it's called the job
market. We should help baby AGIs find work in real job markets.
I think there might be some places on the internet trying to find
applications of traditional kinds of AIs, but I'm not
From: Ben Goertzel [EMAIL PROTECTED]
1. I think a testbed for AGI already exists, it's called the job
market. We should help baby AGIs find work in real job markets.
I think there might be some places on the internet trying to find
applications of traditional kinds of AIs, but I'm not sure
Hi...
I'm wondering how AGI designers view this issue. Usually
we think connectionist systems have the advantages of:
1) generalization and
2) graded / smooth response
among others.
I assume Novamente is using a symbolic representation,
which may become a difficult problem to solve once the
AGI
From: Shane [EMAIL PROTECTED]
Solomonoff induction says nothing at all about searching for
fast programs. If some sequence has a shortest program that
takes some totally crazy amount of computer time to compute
the next symbol, Solomonoff induction will weight toward the
output of this algorithm
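Shane's point can be illustrated with a toy version of the Solomonoff prior, where each candidate program is weighted by 2^(-length) with no penalty at all for runtime (the program lengths here are invented for illustration):

```python
# Toy Solomonoff-style mixture: weight candidate programs by 2**(-length).
# Runtime never enters the weighting, which is the point above: a shorter
# but astronomically slow program still dominates the prediction.
candidates = {
    "fast_program": 20,  # description length in bits; runs quickly
    "slow_program": 15,  # shorter, but takes a crazy amount of time to run
}

def prior_weight(length_bits):
    return 2.0 ** (-length_bits)

weights = {name: prior_weight(bits) for name, bits in candidates.items()}
# weights["slow_program"] > weights["fast_program"], so the mixture
# leans toward the slow program's output.
```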
From: J.Andrew Rogers [EMAIL PROTECTED]
I think (as Ben pointed out also) one of the major challenges in
AGI is to teach it to learn complex cognitive *procedures* (versus
learning of static *concepts* which is a well established area).
A distinction with only a minor difference. How is a
From: Pei Wang [EMAIL PROTECTED]
*. If you think your theory is compatible with AGIs developed by the
various groups on this list, what is unique in your approach that is
missing in other approaches?
The theory suggests some design features such as
1) separation of behavior and knowledge,
2) the
Hello all =)
I've just written an outline of my AGI 'theory',
which has drawn a lot of inspiration from discussions
on this list and other internet places.
http://www.geocities.com/GenericAI/Intro.htm
Hope to hear your comments
The theory is very general and abstract, and I think
it is
The compression approach is essentially a bottom-up,
clustering algorithm whose objective is to form high-level
concepts. I'm wondering if something analogous may be
formulated in the reverse direction, ie going from high level
concepts to low level representations. Maybe this is the
Let's start a thread on how to specify the virtual environment, or blocks world
interface.
The potential senses include:
1. vision
2. audition
3. speech (pre-processing of speech into phonetic elements)
4. linguistic (standardization of natural language, eg basic english)
5. touch
6.
However, I think that the spontaneous emergence of complex concepts like
prepositional ones from sensory inputs is not very practical, and will
take an insane amount of compute time to occur, even though it's
possible.
I think that, in practice, we'll need to use a combination of explicit
Hi all
I have talked to Ben briefly, about turning the AGIRI website
into a consortium. He and I agreed it would be a nonprofit
for now. Though personally I have aspirations of more
elaborate, for-profit objectives for it. But that'll depend on
further development.
I suggest the consortium
Sometimes I send to the list and the posts don't show up.
---
To unsubscribe, change your
Personally my inclination is to stick with the name AI or some
permutation thereof due to its general recognizability.
Even though when you decompose it into the two words artificial and
intelligence some of the connotations aren't quite right;
nevertheless, the word has acquired so many
I think we have a significant disagreement about the relationship
between AGI research and business. I don't see why you think having
marketable products is essential to AGI research. AGI is about building
a digital mind, and doesn't *have* to be any more about business than
raising a
I don't think one needs to become as big as Microsoft or IBM to fund AGI
research very amply, however. I think AGI is best done by a small,
tightly focused team, with ongoing feedback from a larger group of
loosely affiliated scientists. If I had enough research funding to pay
for, say
Ben wrote:
[...]
To be more precise, we are not considering something as narrow as a blocks
world, though we are considering a simulated world.
My strong feeling is that a lot of the concepts learned in a simulation
world could be used by an AI in the real world. If this is not the case
Ben wrote:
In a bottom-up hierarchy of concepts (built up from micro-features)
I'm afraid it is impossible to change to an entirely new bottom
without having to rebuild the whole structure.
Well, I disagree. I can prove you're wrong about impossible, but the
interesting question is
I want to ask: is your class of algorithms guaranteed to terminate
in a *bounded* time? If there is no such guarantee then things may get
very complicated, bordering on the undecidable.
No guarantees -- merely probably approximately correct.
That's the way intelligence is, IMO
Hello
I have updated my website again, and also revised the design map
so it is more detailed now:
http://www.geocities.com/genericAI/DesignMap.gif
For those of you too busy to read my webpage the design map is an
easy way to understand what are the elements of my model.
I'm currently in the
When I suggested filing an AGI-related patent, I was only being
practical because I figured that total abolishment of IP for
software/algorithms is rather unlikely. I'm not really qualified
to comment on the patent system since I'm only familiar with it
from the inventor's perspective. I would
I just put demos of NARS 4.2 (a Java version and a Prolog version) and
several recent papers at
http://www.cogsci.indiana.edu/farg/peiwang/papers.html.
Comments are welcome.
Pei
Hello =)
I just took a brief look at your web site and demos. It's good that
you have probably the only AGI
I noticed that too. Seemed like this list doesn't archive attachments
(or has particularly good SPAM filter :-). I don't have the paper posted
on any site. Will send you a PDF (748 KB). If others want a copy, let me
know via email.
Thanks!
J. W.
Hi
Please send me a copy too, thanks.
Ben wrote:
I'm more interested in the AGI-SIM approach, however...
But why simulate at all? We can have a sensory front-end that
abstracts sensory experiences into input languages of choice,
such as propositional/predicate logic. (In fact, I find this
to be a very promising approach, because
I think the time is now right to develop a sensory frontend.
It need not be very sophisticated, or with high resolution, but the
point is to let AGIs learn physical concepts such as space-time, objects,
colors, etc. After the AGI has learned natural languages the input can
be entirely textual.
Sound is almost certainly easier sensory data to process than visual,
primarily because it is processed as parallel one-dimensional streams
(in the brain and often in computers, but a good idea in the abstract)
rather than trying to map a 2+ dimensional field like vision. Sound
makes a good
INPUT
=
I suggest it should initially include 3 inputs:
1) vision
The visual input would be the most computationally demanding,
and I suggest to reduce the resolution to as low as 32x32.
2) text
These text inputs will be passed literally without
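One way to obtain a 32x32 visual input of the kind suggested above is plain box-averaging. This is my own illustrative sketch (pure Python, no vision library assumed):

```python
# Box-downsample a grayscale image (list of rows of 0-255 values) to a
# small fixed grid, e.g. 32x32, for a low-resolution visual input.
def downsample(img, out=32):
    h, w = len(img), len(img[0])
    bh, bw = h // out, w // out  # assumes h and w are multiples of out
    return [
        [sum(img[y * bh + dy][x * bw + dx]
             for dy in range(bh) for dx in range(bw)) / (bh * bw)
         for x in range(out)]
        for y in range(out)
    ]

small = downsample([[128] * 256 for _ in range(256)])  # 256x256 -> 32x32
```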
hey -- good idea!!
In fact, we already have a beta user interface that does something like
this, in a limited context. You can see certain Novamente productions in
both English form and internal node and link representation form.
However, this is mostly only useful for simple productions,
I forgot to ask: what would be a good programming interface to
use? I mean for the visual module to communicate with the AGI
module?
I can code in Visual C++ or C#, but I want to make the frontend
the easiest to integrate with other programs (mainly for Windows).
I think Linux and others will be
Well, that's tricky. Because perceptual processing is data-intensive, using
standard but bloated representations like SUO-KIF or XML (yes I know the
former is semantic whereas the latter is syntactic, but both tend to be
bloated) is a bad idea for real-time interlinking btw perceptual and
Ben wrote:
I'm happy to contribute design ideas to your sensory front end project.
And if software development is required beyond what you can do yourself, and
a small amount of funding is achieved for the project, members of my team
could do the work at relatively low cost.
While this is
Hi Yan,
You may want to look into the work of David A Arathorn
in this regard. I have read his book Map-Seeking
Circuits in Visual Cognition and believe that his
approach to computer vision is both powerful and
flexible (although I have an intuition that a Bayesian
version of this sort
I like the idea of using a fundamentally time-savvy representation, and a
vector-based representation does that...
This is one of the stronger points in Jeff Hawkins' recent book On
Intelligence -- he reviews in detail how most human perception, including
visual perception, is
Thanks for all the input about the sensory module.
Now I'm looking for a marketing person to help establish
business partnerships with other developers such as
hardware/robotics companies. It's very difficult to find
someone who is familiar with AGI and has the business
skills too. I'm
I just had a somewhat funny experience with the traditional AI research
community
Moshe Looks and I gave a talk Friday at the AAAI Symposium on Achieving
Human-Level Intelligence Through Integrated Systems and Research. Our talk
was an overview of Novamente; if you're curious our
Ben wrote:
|This paper indicates Jeff Hawkins' neuroscience theory gradually converging
|on ideas more similar to those in Novamente, via the use of the common
|language of probability theory.
|
|http://www.stanford.edu/~dil/RNI/DilJeffTechReport.pdf
The paper is very interesting, but their
the file is 0K.
Are there any parallel boards that can be added on to PCs at
affordable prices? (I mean having the potential to be on
I'm not sure if this is relevant as I just caught the end of the thread,
but very cheap PCI or PC104 boards are available for around $200US running
linux, dos or
Where are the killer applications for the masses? You think many game
developers grok MPI? The most likely place would seem an online game
server.
Tasks like visual/speech recognition cannot be done in a robust
way unless you go parallel. I'm sure a host of hard algorithmic
problems also
I guess one problem (I'm doing neural network stuff) is
whether the *main* memory access rate can be increased
by using the Cell. If each subprocessor can access the
main memory independently that'd be a huge performance
boost.
The 256K local memory is not entirely ideal because,
like the brain,
One of the central issues in AGI would be how thoughts are represented.
To give an example, consider the line of reasoning: There are 4 apples on the table, and 5 people in the room. 5 is greater than 4. If each person eats one apple then there won't be enough apples for everyone.
I wonder how
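One naive way to make the representation question concrete: the same chain can be written as explicit symbolic facts plus a single comparison rule. This is a toy of my own, not any list member's system:

```python
# Toy symbolic encoding of the apples/people example.
facts = {"apples_on_table": 4, "people_in_room": 5}

def enough_for_everyone(supply, demand):
    """One apple per person: there are enough iff supply >= demand."""
    return supply >= demand

# 5 is greater than 4, so if each person eats one apple,
# there won't be enough apples for everyone.
result = enough_for_everyone(facts["apples_on_table"],
                             facts["people_in_room"])  # False
```

The interesting question the post raises is how such facts and rules arise and get selected in a general system, which the toy above does not address.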
The example you give is an interesting one from a developmental psychology perspective, because it illustrates what Jean Piaget called conservation of number, a cognitive
skill that young children don't display but school-age children do.
Regarding the formalization of the example in logical
I didn't show how the reasoning itself would be done in Novamente because my time was limited and the trains of reasoning would be pretty long!
We haven't yet tried NM on this kind of example but plan to do so in early 2006. This fall our main AGI goal is to get NM to automatically learn
Regarding how to select the appropriate reasoning rules to apply --- in Novamente this occurs on two levels:
1) some simple heuristics applied as a default
2) based on probabilistic rules that are learned based on experience (via the system's experience carrying out reasoning)
Note that there
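A hypothetical sketch of that two-level scheme (my own toy, not Novamente's actual mechanism): fall back to a fixed heuristic ordering until a rule has enough recorded trials, then prefer the rule with the best observed success rate.

```python
# Toy two-level rule selection: default heuristic order first,
# experience-based (empirical success rate) once data accumulates.
class RuleSelector:
    def __init__(self, rules):
        self.rules = list(rules)                 # default heuristic order
        self.stats = {r: [0, 0] for r in rules}  # rule -> [successes, trials]

    def record(self, rule, success):
        self.stats[rule][1] += 1
        if success:
            self.stats[rule][0] += 1

    def choose(self, min_trials=5):
        tried = {r: s / n for r, (s, n) in self.stats.items()
                 if n >= min_trials}
        if not tried:
            return self.rules[0]         # level 1: default heuristic
        return max(tried, key=tried.get) # level 2: learned preference

sel = RuleSelector(["deduction", "induction"])
```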
What YKY suggested was to make an AGI based on a fixed set of reasoning rules and heuristics that are not pliable and adaptable based on experience.
I don't think this is viable in practice, I think one's system needs to be able to learn how to learn. Evolution is one example of a dynamic that is
Will Pearson wrote:
Define what you mean by an AGI. Learning to learn is vital if you wish to try and ameliorate the No Free Lunch theorems of learning.
I suspect that No Free Lunch is not very relevant in practice. Any learning algorithm has its implicit way of generalization and it may turn out
William wrote:
I suspect that it will be quite important in competition between agents. If one agent has a constant method of learning it will be more easily predicted by an agent that can figure out its constant method (if it is simple). If it changes (and changes how it changes), then
it will be
However, science is also a form of competition between agents (humans being a type of agent), the winner being the most cited.
Let us say that your type of Intelligence becomes prevalent, it would become very easy to predict what this type of intelligence would find
interesting (just feed it all the
James Anderson: Are you still with us? I wish to ask you a few questions...
Yesterday I saw your post (from 1988!) on VisList:
http://www.vislist.com/articles/vislist-07-010-01.htm
in which you proposed a model-based approach to vision. I agree with all the points you made in that post.
My group
James:
This would risk factionalising the list and de-focusing it from GENERAL AI.
I'm of the view that different parts of an intelligent agent could be programmed differently, because of domain-specific knowledge. So my goal is more towards building an autonomous intelligent agent rather than a
Ben:
I have no near-term plans to open-source Novamente. I think that would be a bad idea for AGI safety reasons. I am worried that if we
truly succeed in making a human-level intelligence, and opened up the code, some jerks might do really nasty things with it.
[ It's good that you raised these
Sanjay:
I fully agree here, AGI can be very dangerous in wrong hands. But same is the case with any powerful tech. Controlling the knowledge is only a temporary measure. In fact, general wisdom says that limiting the knowledge to a chosen few can be more dangerous. Power corrupts easily. Its misuse
Hi James
My model is also quite complicated. First you may read this:
http://www.geocities.com/genericai/Architecture.htm
where I use a sequence (or mesh) of features to recognize an object.
Then this explains how feature extraction can be done in an appearance-based way:
Thanks, Ben, for holding the conference, and for persistently pushing the status of AGI forward.
I will try to submit a presentation for my group's vision-for-AGI project, but I may not be able to participate physically at the workshop.
If accepted, I may consider filming a presentation as video
Bruce:
Hello, YKY. If needed, I'm confident AGIRI leadership will agree to host your video presentation via AGIRI's Workshop's
forum (or other): http://www.agiri.org/forum/index.php?showforum=21 If you haven't already, as a step toward this direction, perhaps you may join
http://www.agiri.org/join and
Hi Ben et al
I have been thinking about the vision problem, it seems that the model-based approach is most promising. After studying a lot of real digital pics, I have confidence that, with this approach, a vision system can be developed that can recognize almost everything humans can (with proper
Ben:
The procedures contained inside nodes are expressed as tree structures which may be textually expressed in a language called Combo. These Combo procedures may be expanded into semantic nodes and links for the
purpose of reasoning on them. Also, inferentially derived knowledge expressed as
You are placing your aesthetic preferences for how an AGI should work over the data regarding how real intelligences do work. Knowledge clearly becomes proceduralized and inaccessible to reasoning
with use.
I see your point now. I guess proceduralization is quite necessary for efficiency, rather
If we want to increase content and get more people interested I think the best thing to devote our time and effort to is a wiki rather than a forum. Threads have little chance of staying on topic and finding
things in them as they meander around becomes nightmarish. As we can't present a
What I said in my previous reply was that something very like neural nets (with all the beneficial features for which people got interested in NNs in the first place) *can* do syntax, and all forms of abstract representation.
I do not think it is fair to say that they can't, only that the
On 7/12/06, James Ratcliff [EMAIL PROTECTED] wrote:
This is essential. If a long term plan would be made only formulated in terms of (very concrete) microlevel concepts there would be a near-infinity of possible plans, and plan descriptions would be enormously long, and would contain a lot of
(From a former Soar researcher) [...]
Generally, the bottom-up pattern based systems do better at noisy pattern recognition problems (perception problems like recognizing letters in scanned OCR text or building complex perception-action graphs where the decisions are largely probabilistic like
I tend to agree with Richard's view and I may build an AGI with symbolic, non-numerical inference.
1. As Russell pointed out, if the priors are not known or are in extremely low precision, Bayes rule is not very applicable. Number crunching with priors of 1-2 bits precision is garbage in, garbage
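A worked instance of the garbage-in point: with a prior known only to a bit or two of precision, a fixed likelihood ratio yields wildly different posteriors (the numbers below are illustrative):

```python
# Posterior from prior and likelihood ratio, via the odds form of Bayes rule.
def posterior(prior, likelihood_ratio):
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

# Two priors that a 1-2 bit estimate cannot distinguish:
low = posterior(0.1, 10.0)   # ~0.53
high = posterior(0.4, 10.0)  # ~0.87
# The posterior swings from roughly a coin flip to near-certainty
# on exactly the same evidence.
```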
Let me reply to everyone here...
Pei: You said non-numeric heuristics (such as endorsement theory) may run into problems. Yes, but I believe those problems can be solved using further heuristics (eg see wikipedia article on Nixon diamond). If you resolve the Nixon diamond by referring to
On 8/5/06, Russell Wallace [EMAIL PROTECTED] wrote:
Now, figuring out all the heuristical NTV / symbolic qualifier's update rules, such that an AGI will always be internally consistent, and provably increasing in accuracy, is a very non-trivial task.
Well indeed it is of course impossible, no
On 8/6/06, Charles D Hixson [EMAIL PROTECTED] wrote:
Not strange at all. The brain had a long evolutionary history before language was ever created. Languages are attempts to model parts of the organization of the brain (and NOT attempts at a complete modeling).
Therefore it's reasonable to
On 8/6/06, Richard Loosemore [EMAIL PROTECTED] wrote: I too am a little puzzled by Ben's reservations here. Is it because Yan implied that the rule would be applied literally, and
therefore it would be fragile (e.g. there might be a case where the threshold for significantly was missed by a
On 8/6/06, Pei Wang [EMAIL PROTECTED] wrote:
I think the brain is actually quite smart, perhaps due to intense selection for intelligence over a long period of time dating back to fishes. I suspect that the brain actually has an internal representation somewhat
similar to predicate logic.
On 8/6/06, Pei Wang [EMAIL PROTECTED] wrote: If you just want an advanced production system, why bother to build your own, but not to simply use Soar or ACT-R? Both of them allow you
to define your own rules.
Indeed, when Allen Newell designed Soar, he meant it to be a unified cognitive
On 8/7/06, J. Andrew Rogers [EMAIL PROTECTED] wrote: On Aug 5, 2006, at 1:05 PM, Yan King Yin wrote: Suppose a person has a definition of pi in his mind, but we don't
know if it's the correct one. But if he succeeds in telling us many digits of pi that are correct, then it is overwhelmingly
On 8/7/06, Pei Wang [EMAIL PROTECTED] wrote:
At the beginning, I also believed that first-order predicate logic (FOPL) plus probability theory and fuzzy logic is the way to go, like many others in the field. It is only after I ran into many problems,
that I began to build my alternative, NARS.
On 8/8/06, Pei Wang [EMAIL PROTECTED] wrote: To assign truth-values (your probability) to events only is not enough for AGI, though you are right that you cannot really assign
them to universal statements, which are binary by definition. To me, the general statements (your implication) in
On 8/9/06, Pei Wang [EMAIL PROTECTED] wrote:
There are two different issues: whether an external communication language needs to be multi-valued, and whether an internal representation language needs to be multi-valued. My answer to the
former is No, and to the latter is Yes. Many people
On 8/8/06, J. Andrew Rogers [EMAIL PROTECTED] wrote: C'mon, the brain is not so dumb. Which is precisely why it does not retain patterns more complex than
is strictly necessary to get the job done. The most efficient representation of pi, for almost all practical purposes, is as a sequence of
I think compression is essential to intelligence, but the difference between lossy and lossless may make the algorithms quite different.
But why not let competitors compress lossily? As far as prediction goes, the testing part is still the same!
If you guys have a lossy version of the prize I will
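The "testing part is still the same" observation rests on the standard equivalence between prediction and lossless coding: a predictor's cumulative log-loss on a sequence equals the bit length of its arithmetic code. A small sketch of my own, not tied to any particular prize's rules:

```python
import math

# Cumulative log-loss of a binary predictor = bits of its arithmetic code.
def code_length_bits(sequence, predict):
    """predict(prefix) returns the probability that the next symbol is 1."""
    bits = 0.0
    for i, sym in enumerate(sequence):
        p1 = predict(sequence[:i])
        bits += -math.log2(p1 if sym == 1 else 1.0 - p1)
    return bits

seq = [1] * 9 + [0]
uniform = lambda prefix: 0.5  # knows nothing: exactly 1 bit per symbol
biased = lambda prefix: 0.9   # matches the mostly-ones source
# The better predictor codes the same sequence in fewer bits.
```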
Phil wrote:
YKY is advocating the post-modern viewpoint that knowledge is context-dependent, and true-false assignments and numeric value judgements are both extremely problematic. Pei is pointing out the
commonsense, classicist position, and also the refutation of the post-modern tradition,
On 8/19/06, Ben Goertzel [EMAIL PROTECTED] wrote:
The problem of context may be avoided by using an unambiguous language (for internal representation). Context-dependent words are a feature of natural language (NL) only. It arises when an NL word maps to multiple concepts in
the knowledge
On 8/19/06, Ben Goertzel [EMAIL PROTECTED] wrote: Well, but I can generate a hypothetical grounding for mushroom pie on the fly even though I haven't seen one ;-)
And I can form concepts of mathematical structures that I have never experienced nor exemplified and may in fact be inconsistent
On 8/19/06, Ben Goertzel [EMAIL PROTECTED] wrote: In blackboard the NL word maps to either a board that is black in color
or a board for writing that is usually black/green/white. The KR of those concepts is unambiguous; it's just that there are 2 alternatives. This is very naive...a
I support open-source AGI for the following reasons:
1. It would be nearly impossible to enforce the single-AGI scenario; I think the best strategy is to start a project and try our best in it.
2. One possibility is to make the AGI software commercial, but at a very low cost, and with differential
I have worked out a more detailed AGI architecture:
http://www.geocities.com/genericai/GI-Architecture.htm
But I'm still working on the webpages to explain the modules.
It seems very suitable for the MAGIC message-passing model.
I think it's the simplest architecture for general intelligence.
On 9/5/06, M. Riad [EMAIL PROTECTED] wrote: Sorry to barge into the conversation in this way, but YKY mentioned something I needed clarification with.
You said: With logic I can write down a rule for recognizing this pretty easily, mainly due to the use of symbolic variables. So you see the
I forgot to add that unsupervised learning is also needed, and desirable, in the G0 architecture. How to conduct unsupervised learning under logic would be an interesting research topic.
YKY
On 9/6/06, Fredrik Heintz [EMAIL PROTECTED] wrote: And inductive approaches have problems with overfitting and thereby
lack of generality. They can find a pattern that very closely matches your examples, but if you give it a radically new example it will utterly fail to generalize. Therefore the
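A toy contrast for the overfitting point: a learner that memorizes its training pairs matches them perfectly but has nothing to say about a radically new example, while even a crude general rule extrapolates.

```python
# Memorization vs. generalization on a tiny training set.
train = {(0, 0): 0, (1, 1): 1, (2, 2): 1}

def memorizer(x):
    return train.get(x)  # perfect on training data, None off it

def general_rule(x):
    return 1 if x[0] + x[1] >= 2 else 0  # a pattern that extrapolates

assert all(memorizer(x) == y for x, y in train.items())
assert all(general_rule(x) == y for x, y in train.items())
# On a radically new example the memorizer fails to generalize at all:
new = (5, 5)
answers = (memorizer(new), general_rule(new))  # (None, 1)
```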
On 9/6/06, M Riad [EMAIL PROTECTED] wrote: Interesting. ILP is new for me. I did some basic reading and it's really a different form of supervised learning. But I still don't see how this can help build general knowledge. Using your bottle example, lets assume your ILP system recognizes bottles
On 9/7/06, Fredrik Heintz [EMAIL PROTECTED] wrote:
I haven't studied G0 in detail, but one of our current research problems is the execution and monitoring of plans. We have one of the world's fastest and most expressive planners, TALplanner, which is a
forward-chaining, domain-dependent planner
My guess at a good basis for KR is simply the cleanest, most powerful, and most general programming language I can come up with. That's because to learn
new concepts and really understand them, the AI will have to do the equivalent of writing recognizers, simulators, experiment
David Clark wrote:
I agree that an AGI fundamentally will be created by a combination of data (databases) and procedures (programs) but how large and by who the programs will be created has yet to be determined. Why do you assume that all AGI programs will be created by humans? Why couldn't an