Hi Jey,

You said:

This list is being dominated by nonsense because the scientifically
grounded people on this list don't want to take the time to refute
every piece of fantastic drivel. I certainly don't blame them for
wanting to focus on their time on other more productive projects, but
it ultimately reflects poorly on the list and scares away
scientifically grounded AGI newbies by reinforcing the idea that AGI
is the exclusive domain of "crackpots".

Readers who share your viewpoint might join the completely non-drivel OpenCog 
mailing list.   But it only discusses OpenCog and related 
components. 

I was drawn to this list when Ben founded it as a forum for AGI developers.   
Regardless of how readers feel about my posts, or how I feel about the 
relevance of any particular thread, this list is very valuable to me.  It's 
sort of like panning for gold: every once in a while there is a great nugget 
to be found.  For example, some years ago on the SL4 list, Ben told me about 
hierarchical control systems, which are now the key architectural feature of my 
project.  

If a thread is not relevant to my work, I skim it.   If I see something 
relevant, or if someone asks me a direct question, then I frame my response 
with details of my own work.  Most others here do not have a code base, or at 
least not one that they can freely discuss.  I regard descriptions of working 
systems as especially valuable, even if the AGI approach is very different from 
my own, regardless of the wackiness factor.  What I am looking for is 
constructive critique of my work, forewarning of problems that I may eventually 
have to deal with, and potential solutions and capabilities that I can plan on 
implementing in Texai at some future point.

I hope other AGI developers share at least some of my views on the content of 
this list.  I'm very happy with the current level of moderation and policing.

Cheers,
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



----- Original Message ----
From: Jey Kottalam <[EMAIL PROTECTED]>
To: [email protected]
Sent: Saturday, May 17, 2008 12:29:59 PM
Subject: Re: [agi] Porting MindForth AI into JavaScript Mind.html

On Sat, May 17, 2008 at 10:09 AM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> I think Yudkowsky once said that AI remains at present largely the
> dominion of pioneers and mavericks.  It's certainly not yet a mature
> science.  The typical pattern which I've observed on AI forums and
> lists over the last 15 years goes something like the following:
>

Fully agreed. I'm drawing a distinction between scientifically
grounded pioneering mavericks (e.g. Yudkowsky) and unscientific
nonsense.

This list is being dominated by nonsense because the scientifically
grounded people on this list don't want to take the time to refute
every piece of fantastic drivel. I certainly don't blame them for
wanting to focus on their time on other more productive projects, but
it ultimately reflects poorly on the list and scares away
scientifically grounded AGI newbies by reinforcing the idea that AGI
is the exclusive domain of "crackpots".

-Jey Kottalam

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


