RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
Nothing will ever be attempted if all possible objections must be
first overcome - Dr Samuel Johnson
On Mon, Jul 28, 2008 at 11:10 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Your inference trajectory assumes that cybersex and STD are
probabilistically independent within "sex", but this is not the case.
We only know that:
P(sex
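The dependence point can be made concrete with a toy joint distribution; all of the numbers below are hypothetical, chosen only to show that the product of marginals need not match the joint:

```python
# Toy joint distribution over (cybersex, std) within the class "sex".
# All probabilities are made up for illustration.
joint = {
    (True, True): 0.05,
    (True, False): 0.45,
    (False, True): 0.20,
    (False, False): 0.30,
}

p_cyber = sum(p for (c, _), p in joint.items() if c)  # P(cybersex | sex) = 0.5
p_std   = sum(p for (_, s), p in joint.items() if s)  # P(std | sex) = 0.25
p_both  = joint[(True, True)]                         # P(cybersex & std | sex)

# Under independence we would expect p_both == p_cyber * p_std.
print(p_both, p_cyber * p_std)  # 0.05 vs 0.125 -> not independent
```

An inference rule that silently multiplies the marginals would overstate the joint here by a factor of 2.5.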
On Mon, Jul 28, 2008 at 12:14 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
PLN uses confidence values within its truth values, with a different
underlying semantics and math than NARS; but that doesn't help much with the
above problem
. Or even just folding proteins!
But it seems pretty obvious to me anyway that we will never be able to
predict the weather with any precision without doing an awful lot of
computation.
And what is our mind but the weather in our brains?
Terren
--- On Sun, 6/29/08, Ben Goertzel [EMAIL
... the evolutionary process
itself may be endlessly creative, but in that sense so may be the
self-modifying process of an engineered AGI ...
-- Ben G
On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam [EMAIL PROTECTED] wrote:
--- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
but I don't agree
but the result of a build-up
of dynamic tension. Self-organized criticality is
explained by the late Per Bak in _How Nature Works_, a short, excellent read
and a brilliant example of scientific and mathematical progress in the realm
of complexity.
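Bak's idea can be sketched with his own sandpile model, a minimal toy version (grid size, grain count, and seed are arbitrary choices, not anything from the thread):

```python
import random

def sandpile(size=20, grains=2000, threshold=4, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sandpile: drop grains at random sites;
    any site holding `threshold` grains topples, shedding one grain to
    each neighbor (grains shed off the edge are lost). Returns the
    avalanche size (number of topplings) caused by each dropped grain."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    avalanches = []
    for _ in range(grains):
        grid[rng.randrange(size)][rng.randrange(size)] += 1
        topples = 0
        unstable = True
        while unstable:
            unstable = False
            for i in range(size):
                for j in range(size):
                    if grid[i][j] >= threshold:
                        grid[i][j] -= threshold
                        topples += 1
                        unstable = True
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < size and 0 <= nj < size:
                                grid[ni][nj] += 1
        avalanches.append(topples)
    return avalanches

sizes = sandpile()
print(max(sizes))  # avalanche sizes span many scales: the SOC signature
```

The system tunes itself to the critical state: most drops cause nothing, while a few trigger avalanches of every size, with no parameter tuned from outside.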
--- On Mon, 6/30/08, Ben Goertzel [EMAIL
The argument itself is extremely rigorous: on all the occasions on which
someone has disputed the rigorousness of the argument, they have either
addressed some other issue entirely or they have just waved their hands
without showing any sign of understanding the argument, and then said ...
Richard,
I think that it would be possible to formalize your complex systems argument
mathematically, but I don't have time to do so right now.
Or, then again... perhaps I am wrong: maybe you really *cannot*
understand anything except math?
It's not the case that I can only understand
On Sat, Jun 28, 2008 at 4:13 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
Ed Porter wrote:
I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware available to
Richard,
So long as the general response to the complex systems problem is not "This
could be a serious issue, let's put our heads together to investigate it,"
but "My gut feeling is that this is just not going to be a problem," or
"Quit rocking the boat!", you can bet that nobody really wants to
of programming
complex systems without adequate analysis.
Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
THINKING?
Steve Richfield
The truth is, one of the big problems in
the field is that nearly everyone working on a concrete AI system has
**their own** particular idea of how to do it, and wants to proceed
independently rather than compromising with others on various design
points. It's hardly a herd mentality -- the
While the details vary widely, Mike and I were addressing the very concept
of writing code to perform functions (e.g. thinking) that apparently
develop on their own as emergent properties, and in the process foreclosing
on many opportunities, e.g. developing in variant ways to address problems
mean, work out your ideal way to solve the questions of the mind and share
it with us after you've found some interesting results.
Jim Bromer
But enough of that, let's get to the meat of it: Are you arguing that the
function that is a neuron is not an elementary operator for whatever
computational model describes the brain?
We don't know which function that describes a neuron we need to use --
are Izhikevich's nonlinear dynamics
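Izhikevich's dynamics mentioned above can be sketched in a few lines: this is his 2003 "simple model" with the standard regular-spiking parameter set; the input current, duration, and step size are illustrative choices:

```python
def izhikevich(I=10.0, T=200.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich's simple spiking-neuron model, Euler-integrated.
    Defaults are his 'regular spiking' parameters; I is a constant
    input current (illustrative). Returns the list of spike times."""
    v, u = c, b * c        # membrane potential and recovery variable
    spikes = []
    t = 0.0
    while t < T:
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:      # spike: record and reset
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

print(len(izhikevich()))   # spike count over 200 ms of constant drive
```

Swapping the (a, b, c, d) values gives bursting, chattering, and other firing classes, which is exactly why "which function describes a neuron" is a substantive modeling question.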
wonder why you don't join Stephen Reed on the texai project? Is it
because you don't like the open-source nature of his project?
ben
On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
One thing I don't get, YKY, is why you think you are going to take
textbook methods that have
As we have discussed a while back on the OpenCog mail list, I would like to
see a RDF interface to some level of the OpenCog Atom Table. I think that
would suit both YKY and myself. Our discussion went so far as to consider
ways to assign URIs to appropriate atoms.
Yes, I still think
First of all, the *tractability* of your algorithm depends on
heuristics that you design, which are separable from the underlying
probabilistic logic calculus. In your mind, these 2 things may be
mixed up.
Indefinite probabilities DO NOT imply faster inference.
Domain-specific heuristics
You have done something new, but not so new as to be in a totally
different dimension.
YKY
I have some ideas more like that too but I've postponed trying to sell them
to others, for the moment ;-) ... it's hard enough to sell fairly basic stuff
like PLN ...
Look for some stuff on the
indefinite probabilities?
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:
If you attach indefinite probabilities to FOL propositions, and create
indefinite probability formulas corresponding to standard FOL rules,
you will have a subset of PLN
But you'll have a hard time applying Bayes rule
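The Bayes-rule difficulty can be illustrated with plain probability intervals, a crude stand-in for PLN's indefinite probabilities (which also carry a credibility level, not modeled here); all numbers are hypothetical:

```python
# Sketch: a proposition's truth as a probability interval [lo, hi].

def interval_and(a, b):
    """Bounds on P(A & B) given only P(A) in a and P(B) in b
    (Frechet bounds -- no independence assumed)."""
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def interval_bayes(p_e_given_h, p_h, p_e):
    """Interval form of P(H|E) = P(E|H) P(H) / P(E)."""
    lo = p_e_given_h[0] * p_h[0] / p_e[1]
    hi = min(1.0, p_e_given_h[1] * p_h[1] / p_e[0])
    return (lo, hi)

print(interval_and((0.5, 0.7), (0.6, 0.8)))
# Even modest input intervals widen sharply: roughly (0.28, 1.0) here.
print(interval_bayes((0.7, 0.9), (0.2, 0.4), (0.3, 0.5)))
```

Chaining a few such steps drives the bounds toward the vacuous [0, 1], which is one reason naive interval attachments to FOL rules stall in practice.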
I think it's fine that you use the term atom in your own way. The
important thing is, whatever the objects that you attach probabilities
to, that class of objects should correspond to *propositions* in FOL.
From there it would be easier for me to understand your ideas.
Well, no, we attach
I would imagine so, but I haven't thought about the details
I am traveling now but will think about this when I get home and can
refresh my memory by rereading the appropriate sections of
Probabilistic Robotics ...
ben
On 6/2/08, Bob Mottram [EMAIL PROTECTED] wrote:
2008/6/2 Ben Goertzel [EMAIL
More likely though, is that your algorithm is incomplete wrt FOL, ie,
there may be some things that FOL can infer but PLN can't. Either
that, or your algorithm may be actually slower than FOL.
FOL is not an algorithm, it's a representational formalism...
As compared to standard logical
Here are some examples in FOL:
Mary is female
female(mary)
Could be
Inheritance Mary female
or
Evaluation female mary
(the latter being equivalent to female(mary) )
but none of these has an uncertain truth value attached...
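For concreteness, the three renderings above can be written out with an uncertain (strength, confidence) truth value attached, which is what PLN-style systems add to bare FOL; the structures and numbers below are illustrative only, not OpenCog's actual API:

```python
from dataclasses import dataclass

@dataclass
class TruthValue:
    strength: float    # probability-like strength of the relation
    confidence: float  # how much evidence backs it up

# female(mary) as a bare FOL atom -- no uncertainty attached:
fol = ("female", ("mary",))

# The two PLN-style renderings, each carrying an uncertain truth value
# (0.99, 0.9 are made-up numbers):
inheritance = ("Inheritance", "mary", "female", TruthValue(0.99, 0.9))
evaluation  = ("Evaluation", "female", "mary", TruthValue(0.99, 0.9))
print(inheritance)
```

The point of the thread is precisely the last field: classical FOL atoms are true or false, while these relations carry graded, evidence-weighted truth.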
This is a [production] rule: (not to be confused with an
I'll respond to other points tomorrow or the day after (am currently
on a biz trip through Asia), but just one thing now... You say
With NO money, none of either of our efforts stands a chance. With some
realistic investment money, scanning would at minimum be cheap insurance
that you will be
Do OpenCog atoms roughly correspond to logical atoms?
Not really
And what is the counterpart of (logic) propositions in OpenCog?
ExtensionalImplication relations I guess...
I suggest you don't use non-standard terminology 'cause it's very confusing...
So long as it's well-defined, I guess it's
I have briefly surveyed the research on uncertain reasoning, and found
out that no one has a solution to the entire problem. Ben and Pei
Wang may be working towards their solutions but a satisfactory one may
be difficult to find.
I think the PLN / indefinite probabilities approach is a
theory' of the brain that suggests that virtually all brain functions can be
modelled with Bayesian statistics.
The link (above) is a blog copy of the article in New Scientist.
-dave
mark,
What I'd rather do instead is see if we can get a .NET parallel track
started over the next few months, see if we can get everything ported, and
see the relative productivity between the two paths. That would provide a
provably true answer to the debate.
Well, it's an open-source
and would be happy and see huge benefits either way.
Mark
P.S. Thank you for the forward Ben.
- Original Message -
From: Ben Goertzel
To: [EMAIL PROTECTED]
Sent: Sunday, May 25, 2008 8:29 PM
Subject: Mark Waser arguing that OpenCog should be recoded in .Net ;-p
This email
On Mon, May 26, 2008 at 8:33 PM, J. Andrew Rogers
[EMAIL PROTECTED] wrote:
Replying to myself,
I'll let Mark have the last word since, after all, it is *his* project and
not mine. :-)
I assume that last sentence was sarcastic ;-)
Of course, while Mark is a valued participant in OpenCog, it's
25, 2008 at 6:26 AM, Panu Horsmalahti [EMAIL PROTECTED] wrote:
What is your approach on ensuring AGI safety/Friendliness on this project?
On Sun, May 25, 2008 at 10:42 AM, Mark Waser [EMAIL PROTECTED] wrote:
My own view is that our state of knowledge about AGI is far too weak
for us to make detailed
plans about how to **ensure** AGI safety, at this point
I disagree strenuously. If our arguments will apply to *all*
Please, if you're going to argue something --
please take the time to argue it and don't pretend that you can't magically
solve it all with your guesses (I mean, intuition).
time for mailing list posts is scarce for me these days, so sometimes I post
a conclusion w/out the supporting arguments
Mark,
For OpenCog we had to make a definite choice and we made one. Sorry
you don't agree w/ it.
I agree that you had to make a choice and made the one that seemed right for
various reasons. The above comment is rude and snarky however --
particularly since it seems to come *because* you
Richard wrote:
Then, when we came back from the break, Ben Goertzel announced that the
roundtable on symbol grounding was cancelled, to make room for some other
discussion on a topic like the future of AGI, or some such. I was
outraged by this. The subsequent discussion was a pathetic
YKY
Hi,
Somebody could write an excellent paper about the
potential pitfalls of such an approach (detail, fidelity, deep causality
issues behind appearance, function, and inter-object + inter-feature
relationships, and so on). If nobody else is working in detail on
publishing such an analysis
Loosemore wrote:
I hear people enthusing about systems that are filled with holes that were
discovered decades ago, but still no fix. I read vague speculations and the
use of buzzwords ('Theory of Mind'!?). I see papers discussing narrow AI
projects.
I suppose there was all that at AGI-08
Richard wrote:
My god, Mark: I had to listen to people having a general discussion of
grounding (the supposed theme of that workshop) without a single person
showing the slightest sign that they had more than an amateur's perspective
on what that concept actually means.
I guess you are
Now this looks like a fairly AGI-friendly approach to controlling
animated characters ... unfortunately it's closed-source and
proprietary though...
http://en.wikipedia.org/wiki/Euphoria_%28software%29
ben
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26. April 2008 19:54
Yes, truly general AI is only possible in the case of infinite
processing power, which is
likely not physically realizable.
How much
Richard,
Question: How many systems do you know of in which the system elements
are governed by a mechanism that has all four of these, AND where the system
as a whole has a large-scale behavior that has been shown (by any method of
showing except detailed simulation of the system) to arise
No: I am specifically asking for some system other than an AGI system,
because I am looking for an external example of someone overcoming the
complex systems problem.
The specific criteria you've described would seem to apply mainly to living
systems ... and we just don't have that much
On Sun, Apr 27, 2008 at 5:51 PM, Mark Waser [EMAIL PROTECTED] wrote:
Engineering in the real world is nearly always a mixture of rigor and
intuition. Just like analysis of complex biological systems is.
AIEe! NO! You are clearly not an engineer because a true engineer
just
I said and repeat that we can engineer the complexity out of intelligence
in the Richard Loosemore sense.
I did not say and do not believe that we can engineer the complexity out
of intelligence in the Santa Fe Institute sense.
OK, gotcha...
Yeah... IMO, complexity in the sense you ascribe
. The combination of rigorous formulas applying to
restrictive
cases, together with intuition telling you where to apply what formulas,
works
OK.
Anyway this is a total digression, and I'm done w/ recreational
emailing for the day!
ben
On Sat, Apr 26, 2008 at 10:03 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
In my opinion you can apply Gödel's theorem to prove that 100% AGI is not
possible in this world
if you apply it not to a hypothetical machine or human being but to the
whole universe which can be assumed to be a
Richard,
I've been too busy to participate in this thread, but, now I'll chip
in a single comment,
anyways... regarding the intersection btw your thoughts and Novamente's
current work...
You cited the following 4 criteria,
- Memory. Does the mechanism use stored information about what it was
person?
- Original Message - From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Saturday, April 26, 2008 2:14 PM
Subject: **SPAM** Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S
COMPLEXITY THEORIES---Mark's defense of falsehood
I believe the monsters
Ummm... just a little note of warning from the list owner.
Tintner wrote:
So I await your geometric solution to this problem - (a mere statement of
principle will do) - with great interest. Well, actually no. Your answer is
broadly predictable - you 1) won't have any idea here 2) will have
Richard,
How does this relate to the original context in which I cited this list
of four characteristics? It looks like your comments are completely outside
the original context, so they don't add anything of relevance.
I read the thread and I think my comments are relevant
Let me bring
On Wed, Apr 23, 2008 at 5:21 AM, Joshua Fox [EMAIL PROTECTED] wrote:
To return to the old question of why AGI research seems so rare, Samsonovich
et al. say
(http://members.cox.net/alexei.v.samsonovich/samsonovich_workshop.pdf)
'In fact, there are several scientific communities pursuing the
On Wed, Apr 23, 2008 at 11:29 AM, Mike Tintner [EMAIL PROTECTED] wrote:
Ben/Joshua:
How do you think the AI and AGI fields relate to the embodied grounded
cognition movements in cog. sci? My impression is that the majority of
people here (excluding you) still have only limited awareness of
to pay
programmers to write programs, at least some of the time. You can't
always rely upon voluntary effort, especially when the problem you
want to solve is fairly obscure.
On 19/04/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
Translation: We all (me included) now accept
Potentially, though, massively distributed, collaborative open-source
software development could render your first premise false ...
Though it is unlikely to do so, because collaborative open-source
projects are best suited to situations in which the fundamental ideas behind
the
On Fri, Apr 18, 2008 at 5:35 PM, Mike Tintner [EMAIL PROTECTED] wrote:
Pei: I don't really want
a big gang at now (that will only waste the time of mine and the
others), but a small-but-good gang, plus more time for myself ---
which means less group debates, I guess. ;-)
Alternatively,
YKY,
I believe I've solved the fundamental issues behind the Novamente/OpenCog
design...
It's hard to tell whether you have really solved the AGI problem, at
this stage. ;)
Understood...
Also, your AGI framework has a lot of non-standard, home-brew stuff
(especially the knowledge
We may well see a variety of proto-AGI applications in different
domains, sorta midway between narrow-AI and human-level AGI, including
stuff like
-- maidbots
-- AI financial traders that don't just execute machine learning
algorithms, but grok context, adapt to regime changes, etc.
Hi Mark,
This is, by the way, my primary complaint about Novamente -- far too much
energy, mind-space, time, and effort has gone into optimizing and repeatedly
upgrading the custom atom table that should have been built on top of
existing tools instead of being built totally from scratch.
On Thu, Apr 17, 2008 at 2:42 PM, Mark Waser [EMAIL PROTECTED] wrote:
Really, work on the AtomTable has been a small percentage of work on
the Novamente Cognition Engine ... and, the code running the AtomTable is
now pretty much the same as it was in 2001 (though it was tweaked to make
it
Peruse the video:
http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related
Of course, they are only showing the best stuff. And I am sure there
is plenty of work left to do. But from the variety of behaviors that
are displayed, I would say that the problem of quadruped walking is
, Evgenii Philippov [EMAIL PROTECTED] wrote:
On Sat, Apr 5, 2008 at 7:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
For instance, I'll be curious whether ADIOS's automatically inferred
grammars can deal with recursive phrase structure, with constructs
like "the person with whom I ate dinner"
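What "recursive phrase structure" demands of an induced grammar can be sketched with a two-rule toy grammar that embeds the quoted construct; the rules are hypothetical, not ADIOS output:

```python
import random

# Toy grammar: NP can embed a clause that itself contains an NP,
# giving unbounded nesting ("the person with whom the person ... ate dinner").
GRAMMAR = {
    "NP": [["the", "person"], ["the", "person", "with", "whom", "S"]],
    "S":  [["I", "ate", "dinner"], ["NP", "ate", "dinner"]],
}

def generate(symbol, rng, depth=0, max_depth=3):
    """Expand `symbol`; force the non-recursive first rule once the
    depth budget is spent, so generation always terminates."""
    if symbol not in GRAMMAR:
        return [symbol]
    rules = GRAMMAR[symbol]
    rule = rules[0] if depth >= max_depth else rng.choice(rules)
    out = []
    for sym in rule:
        out.extend(generate(sym, rng, depth + 1, max_depth))
    return out

print(" ".join(generate("NP", random.Random(2))))
```

A finite n-gram or flat-pattern model can memorize any one of these strings, but only a grammar with genuine recursion generates the whole unbounded family, which is why the question is a good stress test for automatically inferred grammars.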
distinctive.
PROTECTED] wrote:
Is it running inside Second Life already, or is it another environment? (sorry
I don't know SL very well)
On Sat, Mar 29, 2008 at 11:40 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Nothing has been publicly released yet, it's still at the
research-prototype stage ... I'll announce
Thank you for your politeness and your insightful comments. I am
going to quit this group because I have found that it is a pretty bad
sign when the moderator mocks an individual for his religious beliefs.
FWIW, I wasn't joking about your algorithm's putative
divine inspiration in my role
or
something. Is it set up already?
Jim Bromer
On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
So if I tell you to handle an object, or a piece of business, like say
removing a chair from the house - that word handle is open-ended and
gives you vast freedom within certain parameters as to how to apply your
hand(s) to that object. Your hands can be applied to move a given box, for
one, right?
Hey - whatever helps. For me, it's a win-win. It would help me, and it
would help accomplish what you guys are trying to do.
Let me know,
~Aki
On Tue, Mar 25, 2008 at 10:40 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
This kind of diagram would certainly be meaningful
it
into a textbook
-- Ben
On Wed, Mar 26, 2008 at 9:49 AM, Mark Waser [EMAIL PROTECTED] wrote:
Hi Ben,
I have a publisher who would love to publish the result of the wiki as a
textbook if you are willing.
Mark
- Original Message -
From: Ben Goertzel [EMAIL PROTECTED
Hi Stephen,
Ben,
Wikipedia has significant overlap with the topic list on the AGIRI Wiki. I
propose for discussion the notion that the AGIRI Wiki be content-compatible
with Wikipedia along two dimensions:
license - authors agree to the GNU Free Documentation License
I have no problem with
BTW I improved the hierarchical organization of the TOC a bit, to
remove the impression that it's just a random grab-bag of topics...
http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook
ben
...
And then I'll save a lot of time during the next year, because when
someone emails me and asks me what they should read to get
up to speed on the general thinking in the AGI field, I'll just point
them to the non-textbook ;-)
-- Ben
of fun weekend ;-)
-- Ben
On Wed, Mar 26, 2008 at 10:43 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
OK... I just burned an hour inserting more links and content into
http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook
I'm burnt out on it for a while, there's too much other stuff on my plate
Now, let me ask you a question: Do you believe that all AI / AGI
researchers are toiling over all this for the challenge, or purely out of
interest? I doubt that as well. Surely there are those elements as drivers
- BUT SO IS MONEY.
Aki, you don't seem to understand the psychology of the
Hi Aki,
Even as a pure scientist, you can
accomplish more in research by producing wealth, than depending on gov't
grants. I say gov't grants because private investment is probably years
away from now. The topic of financing got a lot of attention at AGI 08.
Well, if you're an AGI
). To do that properly, I am waiting
for your book on Probabilistic Logic Networks to be published. Amazon says
July 2008... is that date correct?
Thanks!
, some mathematics,
etc.
***
-- Ben
601 - 700 of 1549 matches