2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
Could you give me a little more detail about your thoughts on this?
Do you think the problem of increasing uncomputability as complexity
grows is the common thread found in all of the interesting,
useful but unscalable methods of AI?
Jim
Hi,
I have proposed a problem domain called function predictor whose
purpose is to allow an AI to learn across problem sub-domains,
carrying its learning from one domain to another. (See
http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )
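For concreteness, here is one minimal sketch of what a function-predictor task instance might look like. All names here (make_task, nearest_neighbour_predict, etc.) are my own illustration, not part of the proposal: a learner sees input/output pairs from a hidden function and is scored on unseen inputs, with tasks drawn from several sub-domains so cross-domain transfer could be measured.

```python
import random

# Illustrative sketch only: these names are my own, not from the proposal.
# A task draws input/output pairs from a hidden function; the learner must
# predict outputs for inputs it has not seen.
def make_task(fn, domain, n_train=8, n_test=4, seed=0):
    rng = random.Random(seed)
    xs = rng.sample(list(domain), n_train + n_test)
    train = [(x, fn(x)) for x in xs[:n_train]]
    test = [(x, fn(x)) for x in xs[n_train:]]
    return train, test

# A trivial baseline learner: echo the output of the nearest training input.
# A real function predictor would induce the function itself, and reuse
# what it learned in one sub-domain when it meets another.
def nearest_neighbour_predict(train, x):
    _, best_y = min(train, key=lambda p: abs(p[0] - x))
    return best_y

def score(train, test, predict):
    # Mean absolute error on the held-out inputs (lower is better).
    return sum(abs(predict(train, x) - y) for x, y in test) / len(test)

# Two sub-domains: linear functions and quadratics. Transfer learning would
# show up as faster improvement on the second after training on the first.
linear_task = make_task(lambda x: 3 * x + 1, range(-50, 50), seed=1)
quad_task = make_task(lambda x: x * x, range(-50, 50), seed=2)

for name, task in [("linear", linear_task), ("quadratic", quad_task)]:
    print(name, round(score(*task, nearest_neighbour_predict), 2))
```

The baseline deliberately learns nothing transferable; the point of the domain is to reward learners that do better than this across sub-domains.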
I also think it would be useful if ...
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 10:17:44 AM
Subject: Re: [agi] Mushed Up Decision Processes
There was a DARPA program ...
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 10:48:59 AM
Subject: Re: [agi] Mushed Up Decision Processes
On Sun, Nov 30, 2008 at 11:17 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
There was a DARPA program on transfer learning a few years back ...
Regarding winning a DARPA contract, I believe that teaming with an
established contractor, e.g. SAIC, SRI, is beneficial.
Cheers,
-Steve
Yeah, I've tried that approach too ...
As it happens, I've had significantly more success getting funding from
various other government agencies ... but
Subject: Re: [agi] Mushed Up Decision Processes
On Sun, Nov 30, 2008 at 11:17 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
There was a DARPA program on transfer learning a few years back ...
I believe I applied and got rejected (with perfect marks on the
technical proposal, as usual ...) ... I never
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 12:16:41 PM
Subject: Re: [agi] Mushed Up Decision Processes
Stephen,
Does that mean what you did at Cycorp ...
Sent: Saturday, November 29, 2008 10:49 AM
To: agi@v2.listbox.com
Subject: [agi] Mushed Up Decision Processes
One of the problems that comes with the casual use of analytical
methods is that the user becomes inured to their habitual misuse. When
a casual familiarity is combined with a habitual ignorance of the
consequences of a misuse, the user can become over-confident or
unwisely dismissive of criticism.
On Nov 30, 2008, at 7:31 AM, Philip Hunt wrote:
2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
In general, the standard AI methods can't handle pattern recognition
problems requiring finding complex interdependencies among multiple
variables that are obscured among scads of other variables.
Jim,
There is a large body of literature on avoiding overfitting, i.e.,
finding patterns that work for more than just the data at hand. Of
course, the ultimate conclusion is that you can never be 100% sure;
but some interesting safeguards have been cooked up anyway, which help
in practice.
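As a concrete illustration of the most basic such safeguard (a toy example of my own, not drawn from any particular paper): evaluate on data the model never saw, and a model that merely memorises the training set is exposed immediately.

```python
import random

rng = random.Random(42)

# Data from a simple underlying rule, y = 2x, plus Gaussian noise.
data = [(x, 2 * x + rng.gauss(0, 1)) for x in range(40)]
train, held_out = data[::2], data[1::2]

# An overfit "model": memorise the training pairs verbatim and predict
# 0.0 for anything unseen. Perfect on train, useless elsewhere.
memory = dict(train)
def overfit_model(x):
    return memory.get(x, 0.0)

# A constrained model: least-squares line through the origin.
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
def simple_model(x):
    return slope * x

def mse(model, pairs):
    # Mean squared error of the model's predictions on the given pairs.
    return sum((model(x) - y) ** 2 for x, y in pairs) / len(pairs)

# The safeguard: compare errors on held-out data, not the training set.
print("train    - overfit: %.2f  simple: %.2f"
      % (mse(overfit_model, train), mse(simple_model, train)))
print("held-out - overfit: %.2f  simple: %.2f"
      % (mse(overfit_model, held_out), mse(simple_model, held_out)))
```

The memorising model has zero training error but a huge held-out error, while the constrained model does about equally well on both; that gap is what the overfitting literature gives you tools to detect and penalise.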
My ...
Jim,
YES - and I think I have another piece of your puzzle to consider...
A longtime friend of mine, Dave, went on to become a PhD psychologist, who
subsequently took me on as a sort of project - to figure out why most
people who met me then either greatly valued my friendship, or quite the ...
Hi. I will just make a quick response to this message and then I want
to think about the other messages before I reply.
A few weeks ago I decided that I would write a criticism of
AI-probability to post to this group. I wasn't able to remember all of
my criticisms, so I decided to post a few ...
Well, if you're willing to take the step of asking questions about the
world that are framed in terms of probabilities and probability
distributions ... then modern probability and statistics tell you a
lot about overfitting and how to avoid it...
OTOH if, like Pei Wang, you think it's misguided ...
--- On Sat, 11/29/08, Jim Bromer [EMAIL PROTECTED] wrote:
I am not sure if Norvig's application of a probabilistic method to
detect overfitting is truly directed toward the AGI community. In
other words: has anyone in this group tested the utility and clarity
of the decision making of a ...
A response to:
I wondered why anyone would deface the
expression of his own thoughts with an emotional and hostile message,
My theory is that thoughts are generated internally and forced into words via a
babble generator. Then the thoughts are filtered through a screen to remove
any that ...
In response to my message, where I said,
What is wrong with the AI-probability group mind-set is that very few
of its proponents ever consider the problem of statistical ambiguity
and its obvious consequences.
Abram noted,
The AI-probability group definitely considers such problems.
There is a ...
On Sat, Nov 29, 2008 at 1:51 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
To me the big weaknesses of modern probability theory lie in
**hypothesis generation** and **inference**. Testing a hypothesis
against data, to see if it's overfit to that data, is handled well by
cross-validation and ...
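Cross-validation is straightforward to sketch; the following is a minimal k-fold version of my own (the function names and the toy mean-only model are illustrative, not from any library):

```python
import random

def k_fold_score(pairs, fit, predict, k=5, seed=0):
    """Estimate out-of-sample error by k-fold cross-validation:
    each fold is held out once while the model is fit on the rest."""
    pairs = pairs[:]
    random.Random(seed).shuffle(pairs)
    folds = [pairs[i::k] for i in range(k)]
    total, n = 0.0, 0
    for i in range(k):
        held_out = folds[i]
        train = [p for j, fold in enumerate(folds) if j != i for p in fold]
        model = fit(train)
        total += sum((predict(model, x) - y) ** 2 for x, y in held_out)
        n += len(held_out)
    return total / n  # mean squared error over all held-out points

# Toy usage: a mean-only model on noisy constant data. The cross-validated
# error should come out near the noise variance, not near zero.
rng = random.Random(1)
data = [(x, 5 + rng.gauss(0, 0.5)) for x in range(30)]
fit_mean = lambda train: sum(y for _, y in train) / len(train)
predict_mean = lambda model, x: model
print(round(k_fold_score(data, fit_mean, predict_mean), 3))
```

Note this only *tests* a given hypothesis class against data; it says nothing about where the hypotheses come from, which is exactly the gap in hypothesis generation being pointed at above.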
Whether an AI needs to explicitly manipulate declarative statements is
a deep question ... it may be that other dynamics that are in some
contexts implicitly equivalent to this sort of manipulation will
suffice
But anyway, there is no contradiction between manipulating explicit
declarative ...
Could you give me a little more detail about your thoughts on this?
Do you think the problem of increasing uncomputability as complexity
grows is the common thread found in all of the interesting,
useful but unscalable methods of AI?
Jim Bromer
Well, I think that dealing with ...
On Sat, Nov 29, 2008 at 11:53 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
Jim,
YES - and I think I have another piece of your puzzle to consider...
A longtime friend of mine, Dave, went on to become a PhD psychologist, who
subsequently took me on as a sort of project - to figure out why ...