I think it's normal for tempers to flare during a depression. This
kind of technology really pays for itself. The only thing that matters
is the code
Eric B
On 10/12/08, Ben Goertzel [EMAIL PROTECTED] wrote:
No idea, Mentifex ... I haven't filtered out any of your messages (or
anyone's) ...
Brad,
But, human intelligence is not the only general intelligence we can imagine
or create. IMHO, we can get to human-beneficial, non-human-like (but,
still, human-inspired) general intelligence much quicker if, at least for
AGI 1.0, we avoid the twin productivity sinks of NLU and
Dave,
Sorry to reply so tardily. I had to devote some time to other, pressing matters.
First, a general comment. There seems to be a very interesting approach to
arguing one's case being taken by some posters on this list in recent days.
I believe this approach was evinced most
Ben,
Well, I guess you told me! I'll just be taking my loosely-coupled
"bunch of clever narrow-AI widgets" right on out of here. No need to
worry about me venturing an opinion here ever again. I have neither the
energy nor, apparently, the intellectual ability to respond to a broadside
Brad,
Sorry if my response was somehow harsh or inappropriate, it really wasn't
intended as such. Your contributions to the list are valued. These last
few weeks have been rather tough for me in my entrepreneurial role (it's not
the best time to be operating a small business, which is what
And, just to clarify: the fact that I set up this list and pay $12/month for
its hosting, and deal with the occasional list-moderation issues that
arise, is not supposed to give my **AI opinions** primacy over anybody
else's on the list. In discussions I only intervene as moderator when
On Sat, Oct 11, 2008 at 4:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
Brad,
Sorry if my response was somehow harsh or inappropriate, it really wasn't
intended as such. Your contributions to the list are valued. These last
few weeks have been rather tough for me in my entrepreneurial role
On Sat, Oct 11, 2008 at 11:37 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
These last
few weeks have been rather tough for me in my entrepreneurial role (it's not
the best time to be operating a small business, which is what Novamente LLC
is) so I may be in a crankier mood than usual for that
Ben,
I think that's all been extremely clear -and I think you've been very good in
all your different roles :). Your efforts have produced a v. good group -and a
great many thanks for them.
And, just to clarify: the fact that I set up this list and pay $12/month for
its hosting, and deal
If nothing else, for the price of a movie ticket per month, it provides me
many more hours of monthly entertainment ... with a much more interesting
cast of characters than any Hollywood flick ... though the plot development
gets confusing at times ;-)
And who knows, some of these discussions
No idea, Mentifex ... I haven't filtered out any of your messages (or
anyone's) ... but sometimes messages get held up at listbox.com by their
automated spam filters (or for other random reasons) and I take too long to
log in there and approve them...
ben
Well, how come my posts aren't
Ben Goertzel wrote:
And, just to clarify: the fact that I set up this list and pay $12/month for
its hosting, and deal with the occasional list-moderation issues that
arise, is not supposed to give my **AI opinions** primacy over anybody
else's on the list. In discussions I only intervene
On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
So, it has, in fact, been tried before. It has, in fact, always failed.
Your comments about the quality of Ben's approach are noted. Maybe you're
right. But, it's not germane to my argument which is that those parts of
A few points...
1)
Closely associating embodiment with GOFAI is just flat-out historically
wrong. GOFAI refers to a specific class of approaches to AI that were
pursued a few decades ago, which were not centered on embodiment as a key
concept or aspect.
2)
Embodiment based approaches to AGI
Ben,
V. interesting and helpful to get this pretty clearly stated general position.
However:
To put it simply, once an AGI can understand human language we can teach it
stuff.
you don't give any prognostic view about the acquisition of language. Mine is -
in your dreams. Arguably, most
I think we're at the stage where a team of a couple dozen could do it in
5-10 years
I repeat - this is outrageous. You don't have the slightest evidence of
progress - you [the collective you] haven't solved a single problem of
general intelligence - a single mode of generalising - so you
Dr. Matthias Heger wrote:
*Ben G wrote*
Well, for the purpose of creating the first human-level AGI, it seems
important **to** wire in humanlike bias about space and time ... this
will greatly ease the task of teaching the system to use our language
and communicate with us effectively...
On Tue, Oct 7, 2008 at 10:43 AM, Charles Hixson
[EMAIL PROTECTED] wrote:
I feel that an AI with quantum level biases would be less general. It would
be drastically handicapped when dealing with the middle level, which is
where most of living is centered. Certainly an AGI should have modules
On Sun, Oct 5, 2008 at 3:55 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding. Looking for new approaches to
this
Brad: Unfortunately,
as long as the mainstream AGI community continue to hang on to what
should, by now, be a thoroughly-discredited strategy, we will never (or
too late) achieve human-beneficial AGI.
Brad,
Perhaps you could give a single example of what you mean by non-human
intelligence.
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
[snip] Unfortunately, as long as the mainstream AGI community continue to
hang on to what should, by now, be a thoroughly-discredited strategy, we
will never (or too late) achieve human-beneficial AGI.
What a strange
David Hart wrote:
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen [EMAIL PROTECTED] wrote:
[snip] Unfortunately, as long as the mainstream AGI community
continue to hang on to what should, by now, be a
thoroughly-discredited strategy, we will never (or too
Dr. Heger,
Point #3 is brilliantly stated. I couldn't have expressed it better. And
I know this because I've been trying to do so, in slightly broader terms,
for months on this list. Insofar as providing an AGI with a human-biased
sense of space and time is required to create a human-like AGI