Matt,
On 5/9/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> > > > After many postings on this subject, I still assert that
> > > > ANY rational AGI would be religious.
> > >
> > > Not necessarily. You execute a program P that inputs the conditions of
> > > the game and outputs "1 box" or "2 boxes".
On Sat, May 10, 2008 at 1:14 AM, Stan Nilsen <[EMAIL PROTECTED]> wrote:
> A test of understanding is if one can give a correct *explanation* for any
> and all of the possible outputs that it (the thing to understand) produces.
Unfortunately, "explanation" is just as ambiguous a word as
"understanding".
William Pearson wrote:
After getting off completely on the wrong foot the last time I posted
something, and not having had time to read the papers I should have, I
have decided to try to start afresh and outline where I am coming
from. I'll get around to writing a proper paper later.
There are two possible
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
> >
> > --- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> >>
> >> (I assume you mean something like P((P,y))=P(y)).
> >>
> >> If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0
> >> for all y.
Matt,
You asked "What would be a good test for understanding an algorithm?"
Thanks for posing this question. It has been a good exercise. Assuming
that the key word here is "understanding" rather than algorithm, I submit -
A test of understanding is if one can give a correct *explanation* for any
and all of the possible outputs that it (the thing to understand) produces.
On Sat, May 10, 2008 at 2:09 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
>>
>> (I assume you mean something like P((P,y))=P(y)).
>>
>> If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for
>> all y.
>
> You're right. But we wouldn
After getting off completely on the wrong foot the last time I posted
something, and not having had time to read the papers I should have, I
have decided to try to start afresh and outline where I am coming
from. I'll get around to writing a proper paper later.
There are two possible modes for designing a com
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney <[EMAIL PROTECTED]>
> wrote:
> >
> > I claim there is no P such that P(P,y) = P(y) for all y.
>
> (I assume you mean something like P((P,y))=P(y)).
>
> If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y.
--- Steve Richfield <[EMAIL PROTECTED]> wrote:
> Matt,
>
> On 5/8/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >
> >
> > --- Steve Richfield <[EMAIL PROTECTED]> wrote:
> >
> > > On 5/7/08, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> > > >
> > > > See http://www.overcomingbias.com/2008/01/newcom
Hi,
The Texai system, as I envision its deployment, will have the following
characteristics:
* a lot of processes
* a lot of hosts
* message passing between processes, that are arranged in a
hierarchical control system
* higher level processes will be deliberati
I'll try to explain it more..
Suppose you have a lot of processes, all containing some production rule(s).
They communicate with messages. They all should get cpu time somehow. Some
processes just do low-level responses, some monitor other processes, etc. Some
are involved in looking at the world.
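The scheme described above, production-rule processes exchanging messages and sharing CPU time, could be sketched roughly as follows. All names, and the round-robin scheduling policy, are illustrative assumptions on my part, not a description of the poster's actual design:

```python
from collections import deque

class RuleProcess:
    """A process holding production rules: (match predicate, action) pairs."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules          # list of (predicate, action) pairs
        self.inbox = deque()

    def step(self, send):
        """Consume one message, firing the first rule whose predicate matches."""
        if not self.inbox:
            return
        msg = self.inbox.popleft()
        for predicate, action in self.rules:
            if predicate(msg):
                action(msg, send)
                break

class Scheduler:
    """Round-robin CPU time: each process gets one step per cycle."""
    def __init__(self, processes):
        self.procs = {p.name: p for p in processes}

    def send(self, target, msg):
        self.procs[target].inbox.append(msg)

    def run(self, cycles):
        for _ in range(cycles):
            for p in self.procs.values():
                p.step(self.send)

# A low-level process forwards sensor readings; a monitor process logs them.
log = []
low = RuleProcess("low", [
    (lambda m: m.startswith("sensor:"),
     lambda m, send: send("monitor", "report:" + m)),
])
monitor = RuleProcess("monitor", [
    (lambda m: m.startswith("report:"),
     lambda m, send: log.append(m)),
])
sched = Scheduler([low, monitor])
sched.send("low", "sensor:42")
sched.run(2)
```

After two cycles the monitor's log holds the forwarded reading; a real system would need a fairer or priority-aware scheduler, which is exactly the "they all should get cpu time somehow" question.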
Mike, what is your stance on vector images?
Jim,
I doubt that your "specification" equals my "individualization".
If I want to be able to recognize the individuals Curtis, Brian, Carl, and
Billi Bromer, only images will do it:
http://www.dunningmotorsales.com/IMAGES/people/Curtis%20Bromer.jpg
http://www.newyorksocialdiary.com/socialdiary/
- Original Message
Mike Tintner <[EMAIL PROTECTED]> said,
"The "making sense" level of your brain - an AGI that works - is
the level that seeks individual examples (and exceptions) for every
generalization.
A general intelligence doesn't just generalize, it individualizes. It can
tal
On 5/9/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
Skill: Trimming the whitespace off both ends of a character string.
One of the many annoyances of writing real-world AI programs is having to write
this function to replace the broken system functions that are supposed to do
this, but which
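For concreteness, the skill in question, trimming whitespace off both ends of a string, can be written from scratch in a few lines. This is my own minimal sketch, not the Texai implementation, and the default whitespace set is an assumption:

```python
def trim(s, whitespace=" \t\r\n"):
    """Trim whitespace off both ends of a character string."""
    start = 0
    end = len(s)
    # Advance past leading whitespace.
    while start < end and s[start] in whitespace:
        start += 1
    # Back up past trailing whitespace.
    while end > start and s[end - 1] in whitespace:
        end -= 1
    return s[start:end]
```

A teachable system would be expected to produce something equivalent from a description like the one above, plus a few input/output examples.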
Matt,
On 5/9/08, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> Skill: Trimming the whitespace off both ends of a character string.
>
One of the many annoyances of writing real-world AI programs is having to
write this function to replace the broken system functions that are
supposed to do this,
Hi Matt,
You asked:
What would be a good test for understanding an algorithm?
As I mentioned before, I want to create a system capable of being taught -
specifically capable of being taught skills. And I strongly share your
interest in answers to this question. A student should be able to
And it doesn't literally "make much sense" because your blog has a lot of
generalizations with no examples - no
individualizations/particularisations of, for example, what
individual/particular problems your algorithms might apply to. The "making
sense" level of your brain - an AGI that works -
So many overloads - pattern, complexity, atoms - can't we come up with new
terms, like schfinkledorfs? But a very interesting question is: given an
image of W x H pixels at 1-bit depth (on or off), one frame, how many
"patterns" exist within this grid? When you think about it, it becomes an
extr
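Under the simplest reading, where a "pattern" is just a distinct on/off assignment of the whole grid, the count is 2^(W*H); if a "pattern" is instead any rectangular sub-grid together with its contents, the count grows further. Both readings are my assumptions about an intentionally open-ended question:

```python
def num_configurations(w, h):
    """Distinct on/off assignments for a w x h grid of 1-bit pixels."""
    return 2 ** (w * h)

def num_subgrid_patterns(w, h):
    """Count if a 'pattern' is a sub-rectangle shape plus its pixel
    contents (positions within the grid not distinguished): sum over
    all shapes a x b of the 2**(a*b) contents each shape can hold."""
    return sum(2 ** (a * b)
               for a in range(1, w + 1)
               for b in range(1, h + 1))
```

Even a 2x2 grid has 16 whole-grid configurations and 26 shape-plus-contents patterns; counting placements, or allowing non-rectangular subsets of pixels, blows the numbers up much faster.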
- Original Message
From: Matt Mahoney <[EMAIL PROTECTED]>
--- Jim Bromer <[EMAIL PROTECTED]> wrote:
> I don't want to get into a quibble fest, but understanding is not
> necessarily constrained to prediction.
What would be a good test for understanding an algorithm?
-- Matt Mahoney,
Right on. Everything I've read, esp. Grandin, strongly suggests autism is
crucially hypersensitivity rather than an emotional disorder. If every time
the normal person touched someone, they got the equivalent of an electric
shock, they'd stay away from people too. [Thanks for your previous links
On Fri, May 9, 2008 at 4:29 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> I claim there is no P such that P(P,y) = P(y) for all y.
(I assume you mean something like P((P,y))=P(y)).
If P(s)=0 (one answer to all questions), then P((P,y))=0 and P(y)=0 for all y.
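Nesov's counterexample can be checked mechanically: the constant function answering 0 to every question satisfies P((P,y)) = P(y), since both sides are 0 regardless of y. A trivial sketch (the sample inputs are arbitrary):

```python
# The constant-zero "predictor": one answer to all questions.
def P(s):
    return 0

# For any y, P applied to the pair (P, y) agrees with P applied to y.
for y in ["any question", 42, ("nested", "tuple")]:
    assert P((P, y)) == P(y) == 0
```

This only refutes the universally quantified claim as stated; it says nothing about whether a non-trivial P with this property exists.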
--
Vladimir Nesov
[EMAIL PROTECTED]
- Original Message
From: Mike Tintner <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, May 8, 2008 9:05:22 PM
Subject: Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]
I just want to make the point that I think categorical "grounding" is necessary
for AGI, but I believe
Boris: I define intelligence as an ability to predict/plan by discovering &
projecting patterns within an input flow.
IOW a capacity to generalize. A general intelligence is something that
generalizes from incoming info. about the world.
Well, no, it can't be just that. Look at what you write
Ryan,
Thanks for the clarifications and the links!
Cheers,
Brad
- Original Message -
From: "Bryan Bishop" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>;
<[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Wednesday, May 07, 2008 9:46 PM
Subject: Re: Accidental Genius