PS: The Chinese quantum code is hackable.

> Date: Sun, 8 Mar 2015 23:51:26 -0400
> Subject: Re: [agi] SAT and Dynamic Programs of Models
> From: [email protected]
> To: [email protected]
> 
> > Jim, can you describe an algorithm where P = NP would exponentially
> > speed up visual processing?
> 
> The complexity of trying to find how each pixel (or tiny area) relates
> to other pixels is not a true Satisfiability problem until a (more
> classic-like) image analysis algorithm gathers some information. For
> example, is some area a part of the background? That is not an easy
> question because the 'background' may be quite diverse. Furthermore,
> the desired 'subjects' of an image may not be -perceived- before image
> analysis gets going. Once an algorithm starts picking up information
> then the variations of possibilities start forming a true
> Satisfiability problem, although it may not be expressed in simple
> Boolean relations.
> 
> While most image analysis would take place across a field of data,
> that does not mean that all image analysis is essentially neural networks.
> 
> I don't want to dismiss deep learning neural networks just because
> they have not achieved even shallow AGI. But look at character
> recognition. Alphabet characters have flat distinct shapes. Although
> they may vary widely, one might still design a classical algorithm
> that defines how near a subject character is to a set of training
> characters in terms of vectors (and weighted reasoning) or
> something like that. The attempt to use vectors with 'ideas' or
> semantic objects has been inadequate because the domains of 'ideas'
> do not all fit into a domain of vector space. Space and much of
> physics have benefited from a great deal of effective mathematical
> analysis over the centuries. So computers are great at predicting the
> weather because the averages of history (of different measures of
> events) can be combined with knowledge of the causes of the weather
> and presto - you have some great weather predictions. But if you
> wanted to push deep ANNs, what are you going to find? You are going to
> find that you have to push up against the complexity barriers.
> Although the Boolean relations between areas of an image in an ANN may
> be hidden, they are nevertheless creating complexity barriers. ANNs
> work so well because some problems can be effectively solved by using
> multiple metric approximations.
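The classical vector-distance scheme described above can be illustrated as a nearest-neighbour classifier. The feature vectors below (stroke count, aspect ratio, curve count) are invented stand-ins for illustration, not a real feature set:

```python
# Nearest-neighbour character classifier: measure how close the subject
# character's feature vector is to each training character and pick the
# closest match. The features (stroke count, aspect ratio, curve count)
# are invented stand-ins, not a real feature set.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(subject, training):
    """Return the label of the nearest training character."""
    return min(training, key=lambda label: distance(subject, training[label]))

# Hypothetical training vectors for three characters.
training = {
    "I": (1, 3.0, 0),
    "O": (1, 1.0, 1),
    "X": (2, 1.0, 0),
}

print(classify((1, 2.8, 0), training))  # prints I
```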
> 
> But, let's suppose that AGI will one day be accomplished using metrics
> - in other words weighted reasoning, probability and so on. I realize
> that it is a real possibility. That means that image analysis
> algorithms will be able to recognize anything a human being could
> recognize. Here the problem is not a question of taking tens or
> hundreds of flat land characteristics for tens of characters and
> coming up with a method that effectively measures how close a subject
> character is to different kinds of metrics derived from the training
> characters and then picks the closest matches, but of taking thousands
> of characteristics from thousands of objects to derive estimates of
> what is pictured. In this case the problem is combinatorially more
> complex just because there are so many more possible subjects and
> variants of those subjects. It should be clear that this is a true
> satisfiability problem even though ANNs don't deal directly with the
> satisfiability issue because the acquired weights are so heavily
> distributed. The complexity barriers are effectively satisfiability
> problems even though the distinct relations may be hidden. I hope this
> makes some sense because I really don't know that much about deep
> learning ANNs and I haven't really done that much image analysis.
> 
> One other thing. As you know, a neural network is not the only kind of
> algorithm that can learn. So it is pretty easy to imagine an algorithm
> that is based on supervised learning to develop discrete relations
> that form networks. The relations would of course include weights and
> so on. One of the reasons I would like to develop some better SAT
> methods is that I then could develop AI models using simple concepts
> and build on it. Right now the complexity barriers are so low and so
> pervasive that you can't get any real traction unless the problem can
> be solved using weighted approximations. So weighted reasoning is way
> ahead right now but that may not always be the case. My goal is not to
> achieve detailed precision but to get some basic traction by starting
> with something simple and improving it.
> 
> Jim Bromer
> 
> 
> On Fri, Mar 6, 2015 at 10:04 AM, Matt Mahoney via AGI <[email protected]> 
> wrote:
> > Jim, can you describe an algorithm where P = NP would exponentially
> > speed up visual processing? My understanding is that the most advanced
> > vision algorithms use deep neural networks with a structure similar to
> > the visual cortex. In general, neural network size (in synapses)
> > should be proportional to the training set information content. Thus,
> > training time is O(n^2).
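Matt's O(n^2) estimate is just the product of two linear factors, which can be sketched as arithmetic (assuming, as he does, synapses proportional to training-set information content and one update per synapse per training bit):

```python
# The O(n^2) training-cost estimate as arithmetic: if network size in
# synapses grows linearly with training-set information content n, and
# every training bit updates every synapse, total work is n * n.
def training_cost(n_bits, synapses_per_bit=1):
    synapses = synapses_per_bit * n_bits  # size proportional to content
    return n_bits * synapses              # each bit touches every synapse

# Doubling the training data quadruples the estimated cost.
assert training_cost(2000) == 4 * training_cost(1000)
```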
> >
> > On Thu, Mar 5, 2015 at 10:01 PM, Jim Bromer via AGI <[email protected]> 
> > wrote:
> >> Matt said:
> >> Vision is a pattern recognition problem. You input a picture of a cat
> >> and output a label like "cat". It is not NP-complete because (1)
> >> experimentally, the problem scales polynomially with input size and
> >> (2) the time to verify that a label like "cat" is correct is about the
> >> same as the time it takes to label the image. Thus, the problem is in
> >> P and would not benefit even if P = NP.
> >> -------------------------------------------------
> >> This is a truly insipid response. You have taken one narrow situation
> >> and used it in an over-generalization of a kind of AGI problem. "The
> >> problem scales polynomially with input size"? The point that I made is
> >> that the general analysis of imagery is presumably as bad or worse
> >> than NP (in the lexicon of the day). What I mean is that there is
> >> sufficient evidence that AGI is, in the worst case, at least
> >> exponentially difficult and that makes it worthwhile to examine why
> >> that may be. One reason, the reason I gave, is that the easiest
> >> methods to make a methodical and thorough analysis of the relations
> >> between associated pixels would be those that are (literally) in NP.
> >> The implied case of scaling a particular picture and arguing that it
> >> would scale polynomially with input size is analogous to saying that
> >> converting an unrestricted Boolean Satisfiability problem to 3-SAT
> >> scales polynomially (and that somehow proves that unrestricted SAT
> >> scales polynomially). It is pretty obvious that you have little
> >> experience with visual data.
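The conversion referred to above really is polynomial: the standard chain encoding turns a k-literal clause into k-2 clauses using k-3 fresh variables. A minimal sketch in DIMACS-style integers:

```python
# Chain encoding of one long clause into 3-SAT: a k-literal clause
# becomes k-2 clauses over k-3 fresh variables (DIMACS-style ints,
# negative = negated). The growth is linear, hence polynomial overall.
def clause_to_3sat(literals, next_var):
    """Convert one clause; next_var is the first unused variable number.
    Returns (list of 3-literal clauses, updated next_var)."""
    k = len(literals)
    if k <= 3:
        return [list(literals)], next_var
    clauses = [[literals[0], literals[1], next_var]]
    for lit in literals[2:k - 2]:
        clauses.append([-next_var, lit, next_var + 1])
        next_var += 1
    clauses.append([-next_var, literals[k - 2], literals[k - 1]])
    return clauses, next_var + 1

out, _ = clause_to_3sat([1, 2, 3, 4, 5], next_var=6)
print(out)  # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]
```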
> >>
> >> This is an example of a blatant overgeneralization being declared as
> >> if it were a factual statement. I can't casually explain why visual
> >> analysis is at least exponentially difficult because I am not enough
> >> of an expert to be that familiar with all the problems. However, I am
> >> confident that there is no overwhelming evidence to suggest that, in
> >> general, it is less difficult.
> >> Jim Bromer
> >>
> >>
> >> On Thu, Mar 5, 2015 at 1:04 PM, Matt Mahoney via AGI <[email protected]> 
> >> wrote:
> >>> On Wed, Mar 4, 2015 at 2:51 AM, Jim Bromer <[email protected]> wrote:
> >>>>  On Tue, Feb 17, 2015 at 11:52 PM, Matt Mahoney via AGI 
> >>>> <[email protected]> wrote:
> >>>>> On Tue, Feb 17, 2015 at 10:26 PM, Jim Bromer via AGI <[email protected]> 
> >>>>> wrote:
> >>>>>> I started wondering about how a good Satisfiability model might be
> >>>>>> used with AGI.
> >>>>>
> >>>>> It wouldn't because the hard problems in AI like vision and language
> >>>>> are not NP-hard. The more useful application would be breaking nearly
> >>>>> all forms of cryptography. (One time pad would still be secure).
> >>>>> -- Matt Mahoney
> >>>>
> >>>> I seriously doubt the premise that the hard problems like vision and
> >>>> language in AI are not NP-hard.
> >>>
> >>> NP-hard means NP-complete or harder. NP-complete means that a solution
> >>> would solve any problem in NP. NP is the class of problems whose
> >>> answers can be verified in time that is a polynomial function of the
> >>> input size. P is the class of problems that can be solved in
> >>> polynomial time. It is widely believed by everyone except Jim Bromer
> >>> that P != NP. This belief is not because of any proof, but because
> >>> thousands of other people like Jim Bromer who believed P = NP failed
> >>> to find polynomial time solutions to any NP-complete problems after
> >>> years of effort until they were convinced they would be better off if
> >>> they gave up. The time it takes to give up is inversely proportional
> >>> to the effort the person puts into studying the math and researching the
> >>> work of others instead of repeating their mistakes.
> >>>
> >>>> My (admittedly limited) experience
> >>>> with visual AI ran up against NP-Hard solutions that I thought would
> >>>> work.
> >>>
> >>> Vision is a pattern recognition problem. You input a picture of a cat
> >>> and output a label like "cat". It is not NP-complete because (1)
> >>> experimentally, the problem scales polynomially with input size and
> >>> (2) the time to verify that a label like "cat" is correct is about the
> >>> same as the time it takes to label the image. Thus, the problem is in
> >>> P and would not benefit even if P = NP.
> >>>
> >>>> And since language could be considered to be a form of
> >>>> cryptography, then your conjunction of cases (not language but
> >>>> cryptography) does not look really strong.
> >>>
> >>> No, language is also a pattern recognition problem.
> >>>
> >>>> (Visual processing also
> >>>> might be considered to be a form of cryptography and indeed it is used
> >>>> as such in captchas.)
> >>>
> >>> Cryptography depends on the existence of one-way functions: given
> >>> function f and output f(x), you can't find input x any faster than
> >>> trying all possible values and comparing the outputs. If P = NP, then
> >>> one-way functions would not exist. You could build a circuit that
> >>> computes f and compares the output. Then set the bits of x one at a
> >>> time and ask your polynomial SAT solver if a solution exists. If not,
> >>> flip the bit before going to the next bit.
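The bit-by-bit inversion argument above can be sketched directly. Since no polynomial-time SAT solver is known, a brute-force oracle stands in for it here, and the "one-way" function is a toy bitwise complement, purely for illustration:

```python
# Bit-by-bit inversion of f via a SAT oracle, as described above. No
# polynomial SAT solver is known, so a brute-force stand-in plays that
# role here; the "one-way" function is a toy bitwise complement.
def satisfiable(f, target, prefix, n):
    """Stand-in oracle: does some n-bit x extending prefix give
    f(x) == target? (A polynomial SAT solver would answer this
    without enumeration.)"""
    rest = n - len(prefix)
    xs = ([prefix + format(i, f"0{rest}b") for i in range(2 ** rest)]
          if rest else [prefix])
    return any(f(x) == target for x in xs)

def invert(f, target, n):
    """Fix x's bits one at a time: try 0, flip to 1 if 0 cannot work."""
    x = ""
    for _ in range(n):
        x += "0" if satisfiable(f, target, x + "0", n) else "1"
    return x

f = lambda s: "".join("1" if c == "0" else "0" for c in s)  # toy f
print(invert(f, "0110", 4))  # prints 1001
```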
> >>>
> >>> You could argue that a captcha is a one way function. It is easy to
> >>> convert text to an image, but hard to convert it back. But it is
> >>> polynomially hard, not exponentially hard. Adding one bit to the image
> >>> doesn't double the solution time, like adding one bit to an encryption
> >>> key would.
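The distinction in the last paragraph is easy to make concrete: one extra bit doubles an exponential search space but barely moves a polynomial cost. A toy comparison:

```python
# One extra bit doubles an exponential search space but barely moves a
# polynomial cost -- the sense in which a captcha is "polynomially hard"
# while key search is exponentially hard.
def key_search(key_bits):
    return 2 ** key_bits            # one more bit doubles the work

def poly_work(input_bits, degree=2):
    return input_bits ** degree     # one more bit barely matters

assert key_search(129) == 2 * key_search(128)
assert poly_work(1025) / poly_work(1024) < 1.01
```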
> >>>
> >>> --
> >>> -- Matt Mahoney, [email protected]
> >>>
> >>>
> >>> -------------------------------------------
> >>> AGI
> >>> Archives: https://www.listbox.com/member/archive/303/=now
> >>> RSS Feed: https://www.listbox.com/member/archive/rss/303/24379807-653794b5
> >>> Modify Your Subscription: https://www.listbox.com/member/?&;
> >>> Powered by Listbox: http://www.listbox.com
> >>
> >>
> >
> >
> >
> > --
> > -- Matt Mahoney, [email protected]
> >
> >
> 
> 

