If you want to work on a serious problem...

Consider what I call the abstraction problem.
We know that a mark of intelligence is the ability to make better choices.

This quickly leads to the issue of forecasting, or modeling, or whatever other means might be used to estimate the outcomes of given choices. One example of another method for determining probable outcomes: use information that comes from "others," or what you might call others' opinions. The opinion method quickly encounters the problem of determining whose "opinion" to value. This class of problem falls into what I call the abstraction problem.
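To make the "whose opinion to value" part concrete, here is a rough Python sketch (my own illustration; the trust weights and the blending rule are just assumptions, not a proposed solution):

# Hypothetical sketch: blend others' opinions about an outcome,
# weighted by how much we trust each source. Where the trust
# numbers come from is exactly the hard, unsolved part.
def weighted_opinion(opinions, trust):
    """opinions: {source: probability estimate in [0, 1]}
    trust: {source: non-negative weight}"""
    total = sum(trust.get(src, 0.0) for src in opinions)
    if total == 0:
        raise ValueError("no trusted sources")
    return sum(p * trust.get(src, 0.0) for src, p in opinions.items()) / total

# Three sources estimate the chance a plan succeeds:
opinions = {"alice": 0.9, "bob": 0.4, "carol": 0.7}
trust = {"alice": 2.0, "bob": 0.5, "carol": 1.0}
print(round(weighted_opinion(opinions, trust), 2))  # 0.77, dominated by alice

The arithmetic is trivial; assigning the trust values is not. That gap is the abstraction problem in miniature.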

Why abstract? (Think more like Picasso stepping away from realism than like a summary of a technical paper.)

Use the word "opportunity" to represent a given choice. We can say that at any given moment the "unit" has multiple opportunities to consider and compare. By my definition of intelligence, it would be intelligent to choose the opportunity that produces the best outcome (maximum benefit at least cost with high probability). (Drum-roll...) A big issue in the field of AGI is how to measure and compare benefit (and risk, and probability).

This is where abstraction comes in. There must be some mechanism that can "reduce" all aspects of a benefit to a number. Benefit A reduces to 71 and Benefit B reduces to 58; for decision-making purposes, then, I will prefer A over B. This doesn't mean I can make a good choice yet, as there are risks and circumstances to consider. Applying the same thinking, the risk will have an "abstract" value that weighs against the benefit. Circumstances might be the "easy" factor to bring into our unit's calculations, but there is also the unknown: a circumstance may not materialize as expected.
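Here is a toy version of the reduction I mean (again my own sketch; the scoring formula is just assumed expected-value arithmetic, not a solved abstraction mechanism):

# Hypothetical sketch: reduce each opportunity to a single abstract
# score so opportunities become directly comparable. The formula
# (probability-weighted benefit minus cost and risk) is an assumption
# for illustration only.
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    benefit: float      # abstract benefit value, e.g. 71
    cost: float         # abstract cost of pursuing it
    risk: float         # abstract downside if things go wrong
    probability: float  # chance the expected circumstances materialize

    def score(self):
        # Weigh benefit by its probability; weigh risk by the
        # complementary probability that the circumstance fails.
        return (self.probability * self.benefit
                - self.cost
                - (1 - self.probability) * self.risk)

choices = [
    Opportunity("A", benefit=71, cost=10, risk=30, probability=0.8),
    Opportunity("B", benefit=58, cost=5, risk=10, probability=0.9),
]
best = max(choices, key=Opportunity.score)
print(best.name)  # B: A's higher benefit loses to its cost and risk

Even this toy shows the point: the bigger raw benefit doesn't automatically win once risk and probability are folded in. The real problem is producing the 71s and 58s in the first place.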


I think you get the picture. The reduction to abstraction is not an unsolvable problem, but it is likely to be handled more with rules of thumb than with super-accurate simulation. Simulations are great for flight simulators, but an AGI, by definition, operates in something much more varied than the atmosphere encountered by airplanes.

I am interested in AGI, but as I've stated before, I'm not a believer in superintelligence. Building machines that use various rules of thumb to come up with interesting results fascinates me. Okay, I'll admit an AGI may be slightly better than human performance, but that is orders of magnitude from "SUPERINTELLIGENCE." I digress...

Bottom line: there are systems coming online that use clever rules of thumb and produce novel responses, but they are more likely the result of abstraction being programmed in by human insight. The abstraction problem could be attacked from many angles, yet so far I've seen little in the way of a direct effort to build an "abstractor." Humans do it all the time, so it is not impossible.

Stan

On 03/23/2017 05:04 PM, Mike Archbold wrote:
That sounds about like a requirements list, if you just consider
the requirements either not met or not integrated. Ben G, Peter Voss,
and a few others I know have pretty good lists like that around.

On 3/23/17, Ben Kapp <[email protected]> wrote:
I wonder if we may be able to compile a (nearly) exhaustive list of
problems that exist in AGI research currently. Such a list could be of
use for (among other things) directing people to the kinds of problems
we need solved.


