Aaron Hosford <[email protected]> wrote:
They most definitely are not new ideas. But what I think has been missing
is a willingness to allow the system to be *occasionally* wrong in exchange
for the algorithms becoming tractable. I think there is a strong tendency
among most researchers to go for 100% correctness, and then the systems
they design or build grind to a halt as soon as more complex problems are
encountered.
---------------------------------------------------

I think that there are two problems with this.  One is that a lot of very
basic information has to be very precise in order for the structure of an
'insight' (whatever you want to call it) to hold up.  Although most ideas
only have to be approximately correct, or can be quite imaginative, the
knowledge those ideas are built upon is usually pretty substantial.  It
could be that computers are now powerful enough to build on weak or
insubstantial foundations, but I kind of doubt it.  For example, I could
have made a number of errors of judgement in this paragraph, but the
foundation upon which my ideas are built still seems sound.  So maybe there
is not a lot of basic information that has to be very precise (that seems
hard to believe), but the reasons why I think that kind of comment might be
true seem substantial.  So even if it somehow turned out that I was wrong,
the broad failures to create AGI seem to uphold this as a reasonable
hypothesis.  You might determine that the hypothesis was wrong, but it
would be more difficult to determine that the basis for the hypothesis was
not appropriate (even if not correct).

The second problem is that there are often many ways to solve a problem,
but it is difficult to represent that except using a piecemeal approach.
When you combine many different components, each of which may have many
different paths to a solution (or to being used effectively), all of a
sudden the particular aspects of the particular paths can become crucial.
So even if there are many different ways to do 'one' thing, when you have
to do many things and those things are heavily interdependent, the lack of
a way to collect the information so that it can be effectively coordinated
for different kinds of situations will limit the effectiveness of the
knowledge.
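To make the combinatorics of that interdependence concrete, here is a toy
sketch (hypothetical numbers, nobody's actual system): each component
considered alone has several workable 'paths', but once neighbouring
components must agree, most joint configurations are ruled out.

```python
from itertools import product

# Toy model: 5 components, each with 4 workable "paths" when considered
# in isolation.  Interdependence is modelled as a joint constraint:
# adjacent components are only compatible when their chosen paths are
# within one step of each other.
n_components, n_paths = 5, 4
compatible = lambda a, b: abs(a - b) <= 1

total = n_paths ** n_components  # 4^5 = 1024 joint configurations
valid = sum(
    all(compatible(c[i], c[i + 1]) for i in range(n_components - 1))
    for c in product(range(n_paths), repeat=n_components)
)
# Only 178 of the 1024 configurations survive the joint constraints.
```

Each component alone looks forgiving (four interchangeable options), but
jointly fewer than a fifth of the combinations work, and the surviving
fraction shrinks geometrically as more interdependent components are
added.  That is the sense in which piecemeal knowledge about components
does not coordinate into joint solutions.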

Jim Bromer


On Mon, Nov 12, 2012 at 3:41 PM, Aaron Hosford <[email protected]> wrote:

> They most definitely are not new ideas. But what I think has been missing
> is a willingness to allow the system to be *occasionally* wrong in
> exchange for the algorithms becoming tractable. I think there is a strong
> tendency among most researchers to go for 100% correctness, and then the
> systems they design or build grind to a halt as soon as more complex
> problems are encountered.
>
> Utility is a function of both correctness and completion. If completion is
> 0, utility is 0. A system that completes and returns a suboptimal result,
> or which returns answers which are probably but not always correct, has
> nonzero utility. This is an improvement. Heuristics are one way to soften
> the problem, allowing occasional error in favor of something that actually
> responds in real time. Stochastic approaches are another.
>
> We humans aren't always right. We don't search every combination for the
> optimal one. We use heuristics and take the best of the first few things we
> think of, calling it good enough. Then, later, while we're putting our
> "good enough" solution to use, we come up with improvements as part of a
> background optimization process -- practice makes perfect. But we never
> actually get to perfection. We just get the job done. Any new insights
> along the way get incorporated into the future strategy. The key is in this
> "ratcheting" process, where progress is held on to. Even random variation
> in strategy results in improvement over time when combined with such a
> ratcheting process.
>
> I have played around a bit with theorem provers. One of the things that
> always struck me is that every single theorem prover I have encountered
> worked on deterministic principles, plodding through every single
> combination of possibilities until it eventually produced (given enough
> time/memory) a final verdict which was absolute. If the problem presented
> is too difficult, the system suffers from combinatorics and effectively
> freezes, attempting to spin its wheels for longer than the lifetime of the
> universe.
>
> This is clearly not how human beings do it. A person sees some sort of
> pattern to the problem which helps to guide the choice of an arbitrary
> theorem or technique. When a blind alley is detected (by way of extensive
> work to no avail) a new approach is selected based on additional
> information gleaned from the previous one. When the answer is found, the
> proof that was constructed guarantees the answer is correct, but this is
> the only absolute guarantee we receive. If the person can't find the proof,
> it's not because every possibility has been tried, and so we can't say that
> it is unprovable with respect to the assumptions. If the person does find a
> proof, there's no guarantee that there isn't a shorter one. But once
> someone has found a proof, the person remembers and shares it, and the work
> never has to be done again.
>
> Why hasn't anyone (to my knowledge) made a theorem prover that works on
> the same stochastic/heuristic principles as human problem solving? No, you
> can't count on such a theorem prover to find a proof reliably. But isn't a
> chance at occasionally stumbling into a solution better than reliably
> failing to find one? The same goes for other attempts at artificial
> reasoning that I've seen, even when they don't rely on the under-powered
> premises of formal logic. Combinatorics are a plague for deterministic,
> full search algorithms. They are not so severe for stochastic/heuristic
> ones, because suboptimal is acceptable.
>
>
>
> On Sun, Nov 11, 2012 at 7:06 PM, Jim Bromer <[email protected]> wrote:
>
>> But if explosive combinatorics could be avoided in the first place, they
>> would have been avoided in the first place. I just think that it
>> is too familiar a plan to take seriously. Perhaps earlier models were not
>> developed very far because their computers were too primitive and the
>> technique might be workable today. But I suspect that if that were so it
>> would be easy to demonstrate. If someone could develop a strategy like that
>> and make it work then we could make it work with something simple that
>> became progressively more complicated as it was used.
>> The use of heuristics and divide-and-conquer strategies are not new
>> ideas, and you haven't offered anything new that convinces me that they
>> are workable.
>> Jim Bromer
>>
>
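For what it's worth, stochastic local search has in fact been applied to
logical satisfiability -- GSAT/WalkSAT-style solvers work in exactly the
spirit Aaron describes.  A minimal WalkSAT-flavoured sketch (an
illustration under simplifying assumptions, not any particular solver):

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=1):
    """WalkSAT-style stochastic search for a satisfying assignment.
    May fail even when a solution exists, but when it does return an
    assignment, that answer is checkable and absolutely correct."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # 1-indexed

    def sat(lit):  # literal +v means var v is true, -v means var v is false
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign  # verified solution in hand
        clause = rng.choice(unsat)
        if rng.random() < p:
            v = abs(rng.choice(clause))  # random-walk move
        else:
            # greedy move: flip the variable that satisfies the most clauses
            def score(v):
                assign[v] = not assign[v]
                s = sum(any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return s
            v = max((abs(l) for l in clause), key=score)
        assign[v] = not assign[v]
    return None  # gave up -- NOT a proof of unsatisfiability

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x1)
clauses = [[1, 2], [-1, 3], [-2, -3], [1]]
model = walksat(clauses, n_vars=3)
```

Note the asymmetry Aaron points out: a returned assignment can be checked
and is absolutely correct, while `None` only means the search gave up,
never that the formula is unsatisfiable.  The 'ratcheting' then happens at
a higher level: once a solution is found, it can be remembered and shared,
and the work never has to be done again.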



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393