Long ago I figured out how to build digital incremental transmissions. What
are they? Imagine a sausage-shaped structure with the outside being many
narrow reels of piano wire, with electrical and computer connections on the
end. Under computer control, each of the rings can be independently
On Aug 11, 2010 at 8:37 PM, Ben Goertzel b...@goertzel.org wrote:
On Wed, Aug 11, 2010 at 11:34 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
Bryan,
*I'm interested!*
Continuing...
On Tue, Aug 10, 2010 at 11:27 AM, Bryan Bishop kanz...@gmail.com wrote:
On Tue, Aug 10, 2010 at 6:25 AM, Steve Richfield wrote:
Note my prior posting explaining my inability even to find a source of
used mice for kids to use in high-school anti-aging
Ben,
Genescient has NOT paralleled human mating habits that would predictably
shorten life. They have only started from a point well beyond anything
achievable in the human population, and gone on from there. Hence, while
their approach may find some interesting things, it is unlikely to find the
Ben,
It seems COMPLETELY obvious (to me) that almost any mutation would shorten
lifespan, so we shouldn't expect to learn much from it. What particular
lifespan-shortening mutations are in the human genome wouldn't be expected
to be the same, or even similar, across separated human populations. Hmmm,
Ben,
On Mon, Aug 9, 2010 at 1:07 PM, Ben Goertzel b...@goertzel.org wrote:
I'm speaking there, on AI applied to life extension; and participating in a
panel discussion on narrow vs. general AI...
Having some interest, expertise, and experience in both areas, I find it
hard to imagine much
Ben,
On Tue, Aug 10, 2010 at 8:44 AM, Ben Goertzel b...@goertzel.org wrote:
I'm writing an article on the topic for H+ Magazine, which will appear in
the next couple weeks ... I'll post a link to it when it appears
I'm not advocating applying AI in the absence of new experiments of
course.
Ben
On Sat, Aug 7, 2010 at 6:10 PM, Ben Goertzel b...@goertzel.org wrote:
I need to substantiate the case for such AGI
technology by making an argument for high-value apps.
There is interesting hidden value in some stuff. In the case of Dr. Eliza,
it provides a communication pathway to sick
John,
You brought up some interesting points...
On Fri, Aug 6, 2010 at 10:54 PM, John G. Rose johnr...@polyplexic.com wrote:
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
On Fri, Aug 6, 2010 at 10:09 AM, John G. Rose johnr...@polyplexic.com
wrote
Ben,
Dr. Eliza with the Gracie interface to Dragon NaturallySpeaking makes a
really spectacular speech I/O demo - when it works, which is ~50% of the
time. The other 50% of the time, it fails to recognize enough to run with,
misses something critical, etc., and just sounds stupid, kinda like most
Ian,
I recall several years ago that a group in Britain was operating just such a
chatterbox as you explained, but did so on numerous sex-related sites, all
running simultaneously. The chatterbox emulated young girls looking for sex.
The program just sat there doing its thing on numerous sites,
To All,
I have posted plenty about statements of ignorance, our probable inability
to comprehend what an advanced intelligence might be thinking, heidenbugs,
etc. I am now wrestling with a new (to me) concept that hopefully others
here can shed some light on.
People often say things that
Mike,
Your reply flies in the face of two obvious facts:
1. I have little interest in what is called AGI here. My interests lie
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several
reasons, as it is directly applicable to Dr. Eliza, and because it casts a
shadow on future
layer, akin to human language etiquette.
I'm not sure how this relates, other than possibly identifying people who
don't honor linguistic etiquette as being (potentially) stupid. Was that
your point?
Steve
==
*From:* Steve Richfield [mailto:steve.richfi...@gmail.com]
To All,
I
be used for other tasks. The knowledge isn't portable.
I also wouldn't say I switched from absolute values to rates of change.
That's not really at all what I'm saying here.
Dave
On Wed, Aug 4, 2010 at 2:32 PM, Steve Richfield steve.richfi...@gmail.com
wrote:
David,
It appears that you may
is bad. It is just different and I really
prefer methods that are not biologically inspired, but are designed
specifically with goals and requirements in mind as the most important
design motivator.
Dave
On Wed, Aug 4, 2010 at 3:54 PM, Steve Richfield steve.richfi...@gmail.com
wrote:
David
with a test environment, success in forming a layered structure,
etc. This particular sub-field is still WIDE open and waiting for some good
answers.
Note that this same problem presents itself, regardless of approach, e.g.
AGI.
Steve
===
On Wed, Aug 4, 2010 at 4:33 PM, Steve Richfield
Matt,
On Tue, Aug 3, 2010 at 4:56 AM, tintner michael tint...@blueyonder.co.uk wrote:
I totally agree that surveillance will become ever more massive - because
it has v. positive as well as negative benefits. But people will find ways
of resisting and evading it - they always do. And it's
Sometime when you are flying between the northwest US to/from Las Vegas,
look out your window as you fly over Walker Lake in eastern Nevada. At the
south end you will see a system of roads leading to tiny buildings, all
surrounded by military security. From what I have been able to figure out,
you
Matt,
I grant you your points, but they miss my point. Where is this
ultimately leading? To a superpower with the ability to kill its opponents
without any risk to itself. This may be GREAT so long as you agree with and
live under that superpower, but how about when things change for the
Matt,
On Mon, Aug 2, 2010 at 1:10 PM, Matt Mahoney matmaho...@yahoo.com wrote:
Steve Richfield wrote:
How about an international ban on the deployment of all unmanned and
automated weapons?
How about a ban on suicide bombers to level the playing field?
Of course we already have
Matt,
On Mon, Aug 2, 2010 at 1:05 PM, Matt Mahoney matmaho...@yahoo.com wrote:
Steve Richfield wrote:
I would feel a **LOT** better if someone explained SOME scenario to
eventually emerge from our current economic mess.
What economic mess?
http://www.google.com/publicdata?ds=wb-wdictype
Jan, Ian, et al,
On Sun, Aug 1, 2010 at 1:18 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:
It seems that *getting things right* is not a priority
for politicians.
Keeping things running is the priority.
... and there it is in crystal clarity - how things get SO screwed up in
small
Jan,
On Fri, Jul 30, 2010 at 4:47 PM, Jan Klauck jkla...@uni-osnabrueck.de wrote:
This brings me to where I came in. How do you deal with irrational
decision
making. I was hoping that social simulation would be seeking to provide
answers. This does not seem to be the case.
Have you ever
Arthur,
Your call for an AGI roadmap is well targeted. I suspect that others here
have their own, somewhat different roadmaps. These should all be merged,
like decks of cards being shuffled together, maybe with percentages
attached, so that people could announce that, say, I am 31% of the way to
Deepak,
An intermediate step is the reverse Turing test (RTT), wherein people or
teams of people attempt to emulate an AGI. I suspect that from such a
competition would come a better idea as to what to expect from an AGI.
I have attempted in the past to drum up interest in an RTT, but so far, no
Everyone has heard about the water analogy for electrical operation. I have
a mechanical analogy for neural operation that just might be solid enough
to compute at least some characteristics optimally.
No, I am NOT proposing building mechanical contraptions, just using the
concept to compute
To all,
There may be a fundamental misdirection here on this thread, for your
consideration...
There have been some very rare cases where people have lost the use of one
hemisphere of their brains, and then subsequently recovered, usually with
the help of recently-developed clot-removal surgery.
Ian, Travis, etc.
On Mon, Jun 28, 2010 at 6:42 AM, Ian Parker ianpark...@gmail.com wrote:
On 27 June 2010 22:21, Travis Lenting travlent...@gmail.com wrote:
I think crime has to be made impossible even for enhanced humans first.
If our enhancement was Internet based it could be turned
of perception and conceptualization).
All of which is computation of various sorts, the basics of which need to be
understood.
Steve
=
On Sun, Jun 27, 2010 at 7:24 PM, Ben Goertzel b...@goertzel.org wrote:
On Sun, Jun 27, 2010 at 7:09 PM, Steve Richfield
steve.richfi...@gmail.com
with most of your points, but I don't find them original except
in phrasing ;)
... ben
On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
Ben, et al,
*I think I may finally grok the fundamental misdirection that current AGI
thinking has taken
, 2010 at 6:43 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
Ben,
What I saw as my central thesis is that propagating carefully conceived
dimensionality information along with classical information could greatly
improve the cognitive process, by FORCING reasonable physics WITHOUT having
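For concreteness, the dimensionality-propagation idea can be sketched as follows. This is my own minimal illustration, not code from the thread; the class name and the (mass, length, time) exponent encoding are hypothetical. The point is that carrying a dimension vector alongside each value lets physically unreasonable combinations be rejected automatically:

```python
# Hypothetical sketch: tag each value with (mass, length, time) exponents,
# so dimensionally unreasonable operations fail instead of propagating.
class Dim:
    def __init__(self, value, dims):
        self.value, self.dims = value, dims  # dims = (mass, length, time)

    def __add__(self, other):
        # Addition only makes physical sense between like dimensions.
        if self.dims != other.dims:
            raise ValueError("cannot add incompatible dimensions")
        return Dim(self.value + other.value, self.dims)

    def __mul__(self, other):
        # Multiplication adds the exponents, as in ordinary dimensional analysis.
        return Dim(self.value * other.value,
                   tuple(a + b for a, b in zip(self.dims, other.dims)))

velocity = Dim(3.0, (0, 1, -1))  # 3 m/s
duration = Dim(2.0, (0, 0, 1))   # 2 s
distance = velocity * duration   # exponents (0, 1, 0): a length, as physics demands
print(distance.value, distance.dims)  # -> 6.0 (0, 1, 0)
```

Adding `velocity + duration` would raise immediately, which is the "forcing reasonable physics" behavior in miniature.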
Travis,
The AGI world seems to be cleanly divided into two groups:
1. People (like Ben) who feel as you do, and aren't at all interested or
willing to look at the really serious lapses in logic that underlie this
approach. Note that there is a similar belief in Buddhism, akin to the
prisoners
Fellow Cylons,
I sure hope SOMEONE is assembling a list from these responses, because this
is exactly the sort of stuff that I (or someone) would need to run a Reverse
Turing Test (RTT) competition.
Steve
Steve
On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield
steve.richfi...@gmail.com wrote:
There has been an ongoing presumption that more brain (or computer)
means more intelligence. I would like to question that underlying
presumption.
That being the case, why don't
John,
Your comments appear to be addressing reliability, rather than stability...
On Mon, Jun 21, 2010 at 9:12 AM, John G. Rose johnr...@polyplexic.com wrote:
-Original Message-
From: Steve Richfield [mailto:steve.richfi...@gmail.com]
My underlying thought here is that we may all
Bromer
On Mon, Jun 21, 2010 at 11:19 AM, Steve Richfield
steve.richfi...@gmail.com wrote:
There has been an ongoing presumption that more brain (or computer)
means more intelligence. I would like to question that underlying
presumption.
That being the case, why don't elephants and other large
One constant in ALL proposed methods leading to computational intelligence
is formulaic operation, where agents, elements, neurons, etc., process
inputs to produce outputs. There is scant biological evidence for this,
and plenty of evidence for a balanced equation operation. Note that
unbalancing
John,
On Mon, Jun 21, 2010 at 10:06 AM, John G. Rose johnr...@polyplexic.com wrote:
Solutions for large-scale network stabilities would vary per network
topology, function, etc..
However, there ARE some universal rules, like the 12 dB/octave
requirement.
Really? Do networks such as
Russell,
On Mon, Jun 21, 2010 at 1:29 PM, Russell Wallace
russell.wall...@gmail.com wrote:
On Mon, Jun 21, 2010 at 4:19 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
That being the case, why don't elephants and other large creatures have
really gigantic brains? This seems
John,
Hmmm, I thought that with your EE background, the 12 dB/octave would
bring back old sophomore-level course work. OK, so you were sick that day.
I'll try to fill in the blanks here...
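For anyone else who was sick that day: the 12 dB/octave figure is the asymptotic rolloff of a two-pole low-pass response, and it is easy to check numerically. This quick sketch is mine, not part of the original exchange:

```python
import math

def gain_db(w, wc, poles=2):
    """Gain of H(s) = 1/(1 + s/wc)^poles at angular frequency w, in dB."""
    magnitude = (1.0 / math.sqrt(1.0 + (w / wc) ** 2)) ** poles
    return 20.0 * math.log10(magnitude)

wc = 1.0  # corner frequency (arbitrary units)
# Well above the corner, each octave (frequency doubling) costs ~12 dB
# for two poles (~6 dB/octave per pole).
drop_per_octave = gain_db(100 * wc, wc) - gain_db(200 * wc, wc)
print(round(drop_per_octave, 1))  # -> 12.0
```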
On Mon, Jun 21, 2010 at 11:16 AM, John G. Rose johnr...@polyplexic.com wrote:
Of course, there is the
No, I haven't been smokin' any wacky tobacy. Instead, I was having a long
talk with my son Eddie, about self-organization theory. This is *his* proposal:
He suggested that I construct a simple NN that couldn't work without self
organizing, and make dozens/hundreds of different neuron and synapse
*From:* Steve Richfield steve.richfi...@gmail.com
*Sent:* Sunday, June 20, 2010 7:06 AM
*To:* agi agi@v2.listbox.com
*Subject:* [agi] An alternative plan to discover self-organization theory
No, I haven't been smokin' any wacky tobacy. Instead, I was having a long
talk with my son Eddie, about
Jim,
I'm trying to get my arms around what you are saying here. I'll make some
probably off-the-mark comments in the hopes that you will clarify your
statement...
On Sun, Jun 20, 2010 at 2:38 AM, Jim Bromer jimbro...@gmail.com wrote:
On Sun, Jun 20, 2010 at 2:06 AM, Steve Richfield
the field of your
choice by simply investing a low level of research effort, and waiting for
things to change. I have selected 3 narrow disjoint areas and now appear to
be a/the leader in each. I am just waiting for the world to recognize that
it desperately needs one of them.
Any thoughts?
Steve
from the very early days of
perceptrons.
Steve Richfield
===
On Wed, Jan 7, 2009 at 1:40 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
Abram,
On 1/6/09, Abram Demski abramdem...@gmail.com wrote:
Well, I *still* think you are wasting your time with flat
(propositional
this process started.
Simple learning methods have not worked well for reasons you mentioned
above. The question here is whether dp/dt methods blow past those
limitations in general, and whether epineuronal methods blow past best in
particular.
Are we on the same page here?
Steve Richfield
On Mon, Jan 5
of
opportunistic instantly-recognized principal components.
Any thoughts?
Steve Richfield
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https
again for staying with me on this. I think we are gradually making
some real progress here.
Steve Richfield
=
On Fri, Jan 2, 2009 at 1:36 PM, Steve Richfield
steve.richfi...@gmail.com wrote:
Abram,
Oh dammitall, I'm going to have to expose the vast extent of my
to be interesting.
3. The acceptable fuzziness of recognition, e.g. just how accurately must
a feature match its pattern.
4. ??? What have I missed in this list?
5. Some or all of the above may be calculable based on ???
Thanks for your help.
Steve Richfield
nearly perfect mathematical components.
Most people don't think of their TV tuners as being analog computers, but...
Steve Richfield
J. Andrew,
On 12/30/08, J. Andrew Rogers and...@ceruleansystems.com wrote:
On Dec 30, 2008, at 12:51 AM, Steve Richfield wrote:
On a side note, there is the clean math that people learn on their way
to a math PhD, and then there is the dirty math that governs physical
systems. Dirty math
of dp/dt space, as in
object space, this would probably exhaust a computer's memory before
completing.
Does this get the Loosemore Certificate of No Objection as being an
apparently workable method for substantially optimal unsupervised learning?
Thanks for considering this.
Steve Richfield
Richard,
On 12/25/08, Richard Loosemore r...@lightlink.com wrote:
Steve Richfield wrote:
Ben, et al,
After ~5 months of delay for theoretical work, here are the basic ideas
as to how really fast and efficient automatic learning could be made almost
trivial. I decided NOT to post the paper
Andrew,
On 12/24/08, J. Andrew Rogers and...@ceruleansystems.com wrote:
On Dec 24, 2008, at 10:33 PM, Steve Richfield wrote:
Of course you could simply subtract successive samples from one another -
at some considerable risk, since you are now sampling at only half the
Nyquist-required
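The differencing idea can be sketched in a few lines. This is my own illustration, not from the thread: the first difference of successive samples approximates dp/dt, but it acts as a high-pass filter, so noise near the Nyquist frequency is amplified rather than rejected, which is one way to see the risk being described:

```python
def first_difference(samples, dt):
    """Estimate dp/dt by subtracting successive samples (a crude high-pass filter)."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

p = [0.0, 1.0, 4.0, 9.0, 16.0]   # p(t) = t^2 sampled at dt = 1
print(first_difference(p, 1.0))  # -> [1.0, 3.0, 5.0, 7.0], i.e. ~2t offset by dt/2
```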
Vladimir,
On 12/24/08, Vladimir Nesov robot...@gmail.com wrote:
On Thu, Dec 25, 2008 at 9:33 AM, Steve Richfield
steve.richfi...@gmail.com wrote:
Any thoughts?
I can't tell this note from nonsense. You need to work on
presentation,
I am having the usual problem that what is obvious
Richard,
On 12/25/08, Richard Loosemore r...@lightlink.com wrote:
Steve Richfield wrote:
There are doubtless exceptions to my broad statement, but generally,
neuron functionality is WIDE open to be pretty much ANYTHING you choose,
including that of an AGI engine's functionality
, etc. Hence, this may impose a cap on a future AGI's potential
abilities, especially if the gold is in #4, #5, etc.
Has someone already looked into this?
Steve Richfield
Philip,
On 12/24/08, Philip Hunt cabala...@googlemail.com wrote:
2008/12/24 Steve Richfield steve.richfi...@gmail.com:
Clearly, it would seem that no AGI researcher can program a level of
self-awareness that they themselves have not reached, tried and failed to
reach, etc
missing something really important here, this should COMPLETELY
transform the AGI field, regardless of the particular approach taken.
Any thoughts?
Steve Richfield
price to pay for a platform.
Steve Richfield
==
On 12/20/08, Valentina Poletti jamwa...@gmail.com wrote:
I have a question for you AGIers.. from your experience as well as from
your background, how relevant do you think software engineering is in
developing AI software
yet.
It isn't vaporware yet because they have made no claims of functionality.
In short, it has a LONG way to go before it can be considered to be
neuroscience vaporware.
Indeed, this article failed to make any case for any rational hope for
success.
Steve Richfield
directions, then you might want to reconsider.
Lotsa luck,
Steve Richfield
of early-stage AGI...
There is already some of that creeping into some games, including actors who
perform complex jobs in changing virtual environments.
Steve Richfield
with a really fast global memory that completely obviates
the complex caching they are proposing.
Steve Richfield
show - what lies behind - their hidden thoughts and
emotions. And you wouldn't have posed your objection.
We obviously still have some issues regarding data vs. prospectively useful
information to iron out.
Steve Richfield
===
MT::
*Even words for individuals
memory speeds of ~100x
the clock speed, which still makes it a high-overhead operation on a machine
that peaks out at ~20K operations per clock cycle.
Steve Richfield
a few if you promise to do something with them.
Indeed, AGI and physics simulation may be two of the app areas that have
the easiest times making use of these 80-core chips...
I don't think Intel is even looking at these. They are targeting embedded
applications.
Steve Richfield
Russell,
On 12/10/08, Russell Wallace [EMAIL PROTECTED] wrote:
On Wed, Dec 10, 2008 at 5:47 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
I don't see how, because it is completely unbounded and HIGHLY related to
specific platforms and products. I could envision a version that worked
many
bits of description about the individuals, but I could easily write a book
about the bin that the purest of them rise to fill.
Steve Richfield
, somewhat akin to the original
Eliza program. However, I should prominently label the standard and
appropriate fields therein so that there is no future
confusion between machine knowledge and Dr. Eliza's sort of inverse machine
knowledge.
Any thoughts?
Steve Richfield
if some REALLY valuable
parts of what it might bring, namely, the solutions to many of the most
difficult problems, can come pretty cheaply, ESPECIALLY if you get your
proposal working.
Are we on the same page now?
Steve Richfield
--
*From:* Steve Richfield
what to do with it once done. Did you have a customer or marketing idea
in mind?
Steve Richfield
asked.
Now, if you want either of these programs to really USE their knowledge
structure to do much more than just checking something off or parroting
something out, then you quickly see the distinction that I was pointing out.
Steve Richfield
--
*From:* Steve
.
Note Buddhism's belief structure that does NOT include a Deity.
Note Islam's various provisions for unbelievers to get a free pass, and
sometimes even break a rule here and there, so long as they pretend to
believe.
Any thoughts?
Steve Richfield
On 12/8/08, Philip Hunt [EMAIL
Matt,
On 12/6/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Sat, 12/6/08, Steve Richfield [EMAIL PROTECTED] wrote:
Internet AGIs are the technology of the future, and always will be. There
will NEVER EVER in a million years be a thinking Internet silicon
intelligence that will be able
Matt,
On 12/4/08, Matt Mahoney [EMAIL PROTECTED] wrote:
--- On Wed, 12/3/08, Steve Richfield [EMAIL PROTECTED] wrote:
It appears obvious to me that the first person who proposes the
following things together as a workable standard, will own the future
'web. This because the world will enter
as an RFC and put it out there. It
sounds like you could easily utilize a USENET group for early demos. Note
that Microsoft maintains some test groups on some of its servers, that Dr.
Eliza already uses without problems for its inter-incarnation communication.
Steve Richfield
to incorporate (some of) their own
capabilities.
Seeing that Dr. Eliza's approach is quite different, they should then figure
out that their only choices are to join or die. I wonder how they would
respond? You know these guys. How would YOU play this hand?
Any thoughts?
Steve Richfield
?
Steve Richfield
===
On Wed, Dec 3, 2008 at 3:55 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Steve,
Based on your attached response, how about this alternative approach:
Send (one of) them an email pointing out
http://www.dreliza.com/standards.php which will obviously
, if the
relationship were an adjective, it could be omitted. Interestingly, English
doesn't even have these adjectives in its lexicon, which makes some BIG gaps
in the representable continuum.
Steve Richfield
=
Steve Richfield wrote:
Mike,
On 12/1/08, *Mike Tintner* [EMAIL PROTECTED] wrote:
. I suspect that a merger of
technologies might be a world-beater.
I wonder if the folks at Cycorp would be interested in such an effort?
BTW, http://www.DrEliza.com is up and down these days, with plans for a new
and more reliable version to be installed next weekend.
Any thoughts?
Steve
.
Have I answered your question?
Steve Richfield
is in ERROR, unless by some
wild stroke of luck, it is possible to say EXACTLY what is meant.
As an interesting aside, Bayesian programs tend (89%) to state their
confidence, which overcomes some (13%) of such problems.
Steve Richfield
=
On 12/1/08, Mike Tintner [EMAIL PROTECTED] wrote
as a child, the filing might be quite different.
Any thoughts?
Steve Richfield
On 11/29/08, Jim Bromer [EMAIL PROTECTED] wrote:
One of the problems that comes with the casual use of analytical
methods is that the user becomes inured to their habitual misuse. When
a casual
be concentrated on
adjectives rather than nouns, adverbs instead of verbs, etc. I noticed this
when hand coding rules for Dr. Eliza - that the modifiers seemed to be much
more important than the referents.
Maybe this hint from wetware will help someone.
Steve Richfield
positions here are based on a presumption that an AGI can be constructed *
without* that theory of everything being in hand. I think that we have an
RRA proof here that this is NOT possible. Nonetheless, it IS interesting to
be a fly on the wall and watch people try.
Steve Richfield
Richard,
On 11/20/08, Richard Loosemore [EMAIL PROTECTED] wrote:
Steve Richfield wrote:
Richard,
Broad agreement, with one comment from the end of your posting...
On 11/20/08, *Richard Loosemore* [EMAIL PROTECTED] wrote:
Another, closely related thing
coming super-intelligent AGI will probably have to master RRAA to
be able to resolve intractable disputes, so you will have to be on top of
RRAA if you are to have any chance of debugging your AGI.
Steve Richfield
==
On Tue, Nov 18, 2008 at 5:29 PM, Steve Richfield
[EMAIL
.
At that point, either there is an AGI to take over, or that society will
take over.
In short, this is a complex area that is really worth understanding if you
are interested in where things are going.
Steve Richfield
, etc., could verify that they are dealing with
people who don't have any of the common forms of societal insanity. Perhaps
the site should be multi-lingual?
Any and all thoughts are GREATLY appreciated.
Thanks
Steve Richfield
now?
Steve Richfield
2008/11/18 Steve Richfield [EMAIL PROTECTED]
To all,
I am considering putting up a web site to filter the crazies as follows,
and would appreciate all comments, suggestions, etc.
Everyone visiting the site would get different questions
Bob,
On 11/18/08, Bob Mottram [EMAIL PROTECTED] wrote:
2008/11/18 Steve Richfield [EMAIL PROTECTED]:
I am considering putting up a web site to filter the crazies as
follows,
and would appreciate all comments, suggestions, etc.
This all sounds peachy in principle, but I expect it would
words as open for
editing.
Steve Richfield
to be playing with a
full deck.
Steve Richfield
has, that he would question that goal.
Thanks everyone for your comments.
Steve Richfield
=
--- On *Tue, 11/18/08, Steve Richfield [EMAIL PROTECTED]* wrote:
From: Steve Richfield [EMAIL PROTECTED]
Subject: Re: [agi] My prospective plan to neutralize AGI and other
dangerous
on
WBE is, at this point in time, a wild goose chase. Good for keeping
neuroscientists employed, but of little value otherwise.
Neuroscientists are probably the most-wrong group you could find. They are
NOT oriented toward making working hardware, there isn't a mathematician
among them, etc.
Steve
-fastened blinders on. It appears to me that
Aubrey has drawn some well reasoned conclusions from some rather
questionable data. At minimum he has propelled various efforts (and possibly
stunted others), which almost certainly has some value, regardless of the
validity of his conclusions.
Steve
by a system-gone-berserk.
In this crazy light, I cut Aubrey no slack at all, but still remain open
minded about whether he is a real futurist, or a pretend futurist. Perhaps
only time will tell.
Steve Richfield
on it.
I'm really looking forward to meeting you at Convergence08. I'd gladly trade
a dinner for a cook's tour of Novamente, et al. Perhaps others here would
like to be in on this.
Steve Richfield