David Jones wrote:
> I should also mention that I ran into problems mainly because I was having
> a hard time deciding how to identify objects and determine what is really
> going on in a scene.

I think that your approach makes the problem harder than it needs to be (not 
that it is easy). Natural language processing is hard, so researchers, in an 
attempt to break the task down into simpler parts, focused on steps like 
lexical analysis, parsing, part-of-speech resolution, and semantic analysis. 
While these subproblems went unsolved, Google skipped them and went directly 
to a solution.

Likewise, parsing an image into physically separate objects and then building a 
3-D model makes the problem harder, not easier. Again, look at the whole 
picture. You input an image and output a response. Let the system figure out 
which features are important. If your goal is to count basketball passes, then 
it is irrelevant whether the AGI recognizes that somebody is wearing a gorilla 
suit.
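
To make the point concrete, here is a minimal sketch (Python, with made-up
synthetic data) of mapping raw pixels directly to a response and letting the
learned weights decide which features matter. Everything in it (image size,
data, learning rate) is an illustration, not a recipe:

# Minimal end-to-end sketch: raw pixels in, response out, with the
# weights left to discover which pixels matter. The image size, the
# synthetic data, and the learning rate are all made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_images, h, w = 200, 8, 8
X = rng.random((n_images, h * w))    # flattened raw pixel values
true_w = rng.normal(size=h * w)      # the unknown mapping to recover
y = X @ true_w                       # the response we want to predict

w_hat = np.zeros(h * w)
for _ in range(500):                 # plain gradient descent on squared error
    grad = 2 * X.T @ (X @ w_hat - y) / n_images
    w_hat -= 0.01 * grad

print("training error:", np.mean((X @ w_hat - y) ** 2))

No object model appears anywhere in the loop; whatever structure matters to
the response ends up encoded in the weights.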

 -- Matt Mahoney, matmaho...@yahoo.com




________________________________
From: David Jones <davidher...@gmail.com>
To: agi <agi@v2.listbox.com>
Sent: Sat, July 24, 2010 2:25:49 PM
Subject: Re: [agi] Re: Huge Progress on the Core of AGI

Abram,

I should also mention that I ran into problems mainly because I was having a 
hard time deciding how to identify objects and determine what is really going 
on in a scene. This adds a whole other layer of complexity to hypotheses. It's 
not just about what is more predictive of the observations; it is about 
deciding what exactly you are observing in the first place (although you might 
say it's the same problem).

I ran into this problem when my algorithm finds matches between items that are 
not the same, or fails to find matches between items that are the same but 
have changed. So, how do you decide whether it is 1) the same object, 2) a 
different object, or 3) the same object, but changed?

And how do you decide its relationship to something else? Is it 1) dependently 
attached, 2) semi-dependently attached (it can move independently, but only in 
certain ways, and it also moves dependently), 3) independent, 4) sometimes 
dependent, 5) previously dependent, but no longer, or 6) previously dependent 
on something else, then independent, and now dependent on something new?
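
To make the kind of decision concrete, here is a toy sketch of scoring those
competing identity hypotheses. The features, the weights, and the penalties
are hypothetical placeholders, not a solution to the problem:

# Toy sketch of competing identity hypotheses for two observations.
# The "observations" are fake feature dicts; the weights and penalties
# are arbitrary placeholders, meant only to show the structure of the
# decision, not to solve the correspondence problem.

def similarity(a, b):
    """Crude similarity: fraction of features with equal values."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def score_hypotheses(obj_before, obj_after):
    s = similarity(obj_before, obj_after)
    return {
        "same object":          s,              # rewards a near-perfect match
        "same object, changed": 0.7 * s + 0.2,  # tolerates some mismatch, but
                                                # pays a complexity penalty
        "different object":     1.0 - s,        # rewards a poor match
    }

before = {"color": "red", "shape": "square", "x": 10, "y": 20}
after  = {"color": "red", "shape": "square", "x": 14, "y": 20}

scores = score_hypotheses(before, after)
print(scores, "->", max(scores, key=scores.get))

The hard part, of course, is that nothing tells you the right weights and
penalties; that is exactly the ambiguity I keep running into.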


These hypotheses are different ways of explaining the same observations, but 
they are complicated by the fact that we aren't sure of the identity of the 
objects we are observing in the first place. Multiple hypotheses may fit the 
same observations, and it's hard to decide why one is simpler or better than 
another. The object you were observing at first may have disappeared. A new 
object may have appeared at the same time (this is why screenshots are a bit 
malicious). Or the object you were observing may have changed. In screenshots, 
sometimes the objects that you are trying to identify as different never 
appear at the same time, because they always completely occlude each other. 
That can make it extremely difficult to decide whether they are the same 
object that has changed or two different objects.

Such ambiguities are common in AGI. It is unclear to me yet how to deal with 
them effectively, although I am continuing to work hard on it. 


I know it's a bit of a mess, but I'm just trying to demonstrate the trouble 
I've run into.


I hope that makes it clearer why I'm having so much trouble finding a way of 
determining which hypothesis is most predictive and simplest.

Dave


On Thu, Jul 22, 2010 at 10:23 PM, Abram Demski <abramdem...@gmail.com> wrote:

>David,
>
>What are the different ways you are thinking of for measuring the
>predictiveness? I can think of a few different possibilities (such as
>measuring number incorrect vs. measuring fraction incorrect, et cetera)
>but I'm wondering which variations you consider significant/troublesome/etc.
>
>--Abram
>
>
>On Thu, Jul 22, 2010 at 7:12 PM, David Jones <davidher...@gmail.com> wrote:
>
>It's certainly not as simple as you claim. First, assigning a probability is
>not always possible, nor is it easy. The factors in calculating that
>probability are unknown and are not the same for every instance. Since we do
>not know what combination of observations we will see, we cannot have a
>predefined set of probabilities, nor is it any easier to create a probability
>function that generates them for us. That is exactly what I meant by
>quantitatively defining the predictiveness... it would be proportional to the
>probability.
>
>Second, if you can define a program in a way that is always simpler when it
>is smaller, then you can do the same thing without a program. I don't think
>it makes any sense to do it this way.
>
>It is not that simple. If it were, we could solve a large portion of AGI
>easily.
>
>On Thu, Jul 22, 2010 at 3:16 PM, Matt Mahoney <matmaho...@yahoo.com> wrote:
>>>David Jones wrote:
>>>> But, I am amazed at how difficult it is to quantitatively define more 
>>>>predictive and simpler for specific problems. 
>>>
>>>It isn't hard. To measure predictiveness, you assign a probability to each
>>>possible outcome. If the actual outcome has probability p, you score a
>>>penalty of log(1/p) bits. To measure simplicity, use the compressed size of
>>>the code for your prediction algorithm. Then add the two scores together.
>>>That's how it is done in the Calgary challenge
>>>http://www.mailcom.com/challenge/ and in my own text compression benchmark.
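>>>
>>>A rough sketch of that scoring in Python (zlib standing in for a real
>>>compressor, and the outcomes and probabilities invented for illustration):
>>>
>>>import math, zlib
>>>
>>># The prediction program whose size we charge for (as source text).
>>>program = b"def predict(history): return {'a': 0.9, 'b': 0.1}"
>>>
>>># Invented outcomes: the probability the program assigned to each
>>># outcome that actually occurred.
>>>assigned = [0.9, 0.9, 0.1]
>>>
>>>penalty = sum(math.log2(1 / p) for p in assigned)  # log(1/p) bits each
>>>size = 8 * len(zlib.compress(program))             # compressed size, bits
>>>
>>>print("prediction penalty:", penalty, "bits")
>>>print("program size:", size, "bits")
>>>print("total score:", penalty + size, "bits")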
>>>
>>> 
>>>-- Matt Mahoney, matmaho...@yahoo.com
>>>
>>>
>>>From: David Jones <davidher...@gmail.com>
>>>To: agi <agi@v2.listbox.com>
>>>Sent: Thu, July 22, 2010 3:11:46 PM
>>>Subject: Re: [agi] Re: Huge Progress on the Core of AGI
>>>
>>>Because simpler is not better if it is less predictive.
>>>
>>>On Thu, Jul 22, 2010 at 1:21 PM, Abram Demski <abramdem...@gmail.com> wrote:
>>>Jim,
>>>Why more predictive *and then* simpler?
>>>--Abram
>>>On Thu, Jul 22, 2010 at 11:49 AM, David Jones <davidher...@gmail.com> wrote:
>>>An Update....
>>>I think the following gets to the heart of general AI and what it takes to
>>>achieve it. It also provides us with evidence as to why general AI is so
>>>difficult. With this new knowledge in mind, I think I will be much more
>>>capable now of solving the problems and making it work.
>>>
>>>I've come to the conclusion lately that the best hypothesis is the one that
>>>is more predictive and then simpler than other hypotheses (in that order:
>>>more predictive, then simpler). But I am amazed at how difficult it is to
>>>quantitatively define "more predictive" and "simpler" for specific
>>>problems. This is why I have sometimes doubted the truth of the statement.
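>>>
>>>A trivial sketch of that ordering with made-up numbers: compare by
>>>predictiveness first, and break ties with simplicity:
>>>
>>>hypotheses = [               # (name, predictiveness, complexity)
>>>    ("H1", 0.90, 12),
>>>    ("H2", 0.90, 5),         # ties H1 on prediction, but simpler
>>>    ("H3", 0.95, 40),        # most predictive, so it wins outright
>>>]
>>>best = min(hypotheses, key=lambda h: (-h[1], h[2]))
>>>print(best)                  # -> ('H3', 0.95, 40)
>>>
>>>The hard part is producing those two numbers in the first place, not
>>>comparing them.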
>>>
>>>In addition, the observations that the AI gets are not representative of
>>>all observations! This means that if your measure of "predictiveness"
>>>depends on the number of certain observations, it can make mistakes. The
>>>specific observations you are aware of may be unrepresentative of the
>>>predictiveness of a hypothesis relative to the truth. If you try to
>>>calculate which hypothesis is more predictive and you don't have the
>>>critical observations that would give you the right answer, you may get
>>>the wrong answer! This all depends, of course, on your method of
>>>calculation, which is quite elusive to define.
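>>>
>>>A tiny invented example of that failure mode, scoring two hypotheses by
>>>total log(1/p) on the outcomes they happen to see:
>>>
>>>import math
>>>
>>># Hypotheses: the probability each assigns to the outcome "H".
>>>h1, h2 = 0.8, 0.5            # suppose the truth is close to h1
>>>
>>>def log_loss(p_heads, outcomes):
>>>    return sum(math.log2(1 / (p_heads if o == "H" else 1 - p_heads))
>>>               for o in outcomes)
>>>
>>>biased = ["T", "T", "H"]                 # unlucky, unrepresentative sample
>>>representative = ["H"] * 8 + ["T"] * 2   # closer to the true 80/20 rate
>>>
>>># On the biased sample h2 scores better; with representative data h1 wins.
>>>print(log_loss(h1, biased), log_loss(h2, biased))
>>>print(log_loss(h1, representative), log_loss(h2, representative))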
>>>
>>>Visual input from screenshots, for example, can be somewhat malicious.
>>>Things can move, appear, disappear, or occlude each other suddenly. So,
>>>without sufficient knowledge, it is hard to decide whether the matches you
>>>find across such large changes are the same object or different objects.
>>>This may indicate that bias and preprogrammed experience should be
>>>introduced to the AI before training. Either that, or the training inputs
>>>should be carefully chosen to avoid malicious input and to make them nice
>>>for learning.
>>>
>>>This is the "correspondence problem" that is typical of computer vision
>>>and has never been properly solved. Such malicious input also makes it
>>>difficult to learn automatically, because the AI doesn't have sufficient
>>>experience to know which changes or transformations are acceptable and
>>>which are not. It is immediately bombarded with malicious inputs.
>>>
>>>I've also realized that if a hypothesis is more "explanatory", it may be
>>>better. But quantitatively defining "explanatory" is also elusive, and it
>>>truly depends on the specific problems you are applying it to, because it
>>>is a heuristic. It is not a true measure of correctness. It is not loyal
>>>to the truth. "More explanatory" is really a heuristic that helps us find
>>>hypotheses that are more predictive. The true measure of whether a
>>>hypothesis is better is simply whether it is the most accurate and
>>>predictive one. That is the ultimate and true measure of correctness.
>>>
>>>Also, since we can't measure every possible prediction or every last
>>>prediction (and we certainly can't predict everything), our measure of
>>>predictiveness can't possibly be right all the time! We have no choice but
>>>to use a heuristic of some kind.
>>>
>>>So, it's clear to me that the right hypothesis is the one that is "more
>>>predictive and then simpler". But it is also clear that there will never
>>>be a single measure of this that can be applied to all problems. I hope to
>>>eventually find a nice model for how to apply it to different problems,
>>>though. This may be the reason that so many people have tried and failed
>>>to develop general AI. Yes, there is a solution, but there is no silver
>>>bullet that can be applied to all problems. Some methods are better than
>>>others. But I think another major reason for the failures is that people
>>>think they can predict things without sufficient information. By
>>>approaching the problem this way, we compound the need for heuristics and
>>>the errors they produce, because we simply don't have sufficient
>>>information to make a good decision with limited evidence. If approached
>>>correctly, the right solution would solve many more problems with the same
>>>effort than a poor solution would. It would also eliminate some of the
>>>difficulties we currently face, if sufficient data is available to learn
>>>from.
>>>
>>>In addition to all this theory about better hypotheses, you have to add on
>>>the need to solve problems in reasonable time. This also compounds the
>>>difficulty of the problem and the complexity of solutions.
>>>
>>>I am always fascinated by the extraordinary difficulty and complexity of
>>>this problem. The more I learn about it, the more I appreciate it.
>>>Dave

>>
>>
>>-- 
>>Abram Demski
>>http://lo-tho.blogspot.com/
>>http://groups.google.com/group/one-logic
>
>-- 
>Abram Demski
>http://lo-tho.blogspot.com/
>http://groups.google.com/group/one-logic
>
