Steve,

I know what dimensional analysis is, but it would be great if you could give
an example of how it's useful for the kind of everyday commonsense reasoning
that, say, a service robot might need to do to figure out how to clean a
house...

thx
ben

On Sun, Jun 27, 2010 at 6:43 PM, Steve Richfield
<steve.richfi...@gmail.com> wrote:

> Ben,
>
> What I saw as my central thesis is that propagating carefully conceived
> dimensionality information along with classical "information" could greatly
> improve the cognitive process, by FORCING reasonable physics WITHOUT having
> to "understand" (by present concepts of what "understanding" means) physics.
> Hutter was just a foil to explain my thought. Note again my comments
> regarding how physicists and astronomers "understand" some processes through
> "dimensional analysis" that involves NONE of the sorts of "understanding"
> that you might think necessary, yet can predictably come up with the right
> answers.
>
> Are you up on the basics of dimensional analysis? The reality is that it is
> quite imperfect, but is often able to yield a short list of "answers", with
> the correct one being somewhere in the list. Usually, the wrong answers are
> wildly wrong (they are probably computing something, but NOT what you might
> be interested in), and are hence easily eliminated. I suspect that neurons
> might be doing much the same, as could formulaic implementations like (most)
> present AGI efforts. This might explain "natural architecture" and guide
> human architectural efforts.
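>
> For concreteness, here is a minimal sketch of that "short list" idea (the
> quantities, exponent ranges, and variable names below are illustrative
> assumptions, nothing more). It enumerates small integer exponents over a few
> known quantities and keeps only the combinations whose (m, kg, s) dimensions
> match a target, leaving a short list to sanity-check by hand:
>
>     # Brute-force dimensional-analysis candidate search (toy illustration).
>     # Dimensions are exponent vectors over (m, kg, s).
>     from itertools import product
>
>     knowns = {
>         "v": (1, 0, -1),   # a speed: m/s
>         "t": (0, 0, 1),    # a time: s
>         "x": (1, 0, 0),    # a length: m
>     }
>     target = (1, 0, -2)    # looking for an acceleration: m/s^2
>
>     candidates = []
>     for exps in product(range(-3, 4), repeat=len(knowns)):
>         dims = tuple(sum(e * d[i] for e, d in zip(exps, knowns.values()))
>                      for i in range(3))
>         if dims == target:
>             candidates.append(dict(zip(knowns, exps)))
>
>     # A handful of dimensionally consistent combinations come out, e.g.
>     # {"v": 1, "t": -1, "x": 0} (v/t) and {"v": 0, "t": -2, "x": 1} (x/t^2);
>     # the physically silly entries are the ones to discard by inspection.
>     print(candidates)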
>
> In short, instead of a "pot of neurons", we might instead have a pot of
> dozens of types of neurons that each have their own complex rules regarding
> what other types of neurons they can connect to, and how they process
> information. "Architecture" might involve deciding how many of each type to
> provide, and what types to put adjacent to what other types, rather than the
> more detailed concept now usually thought to exist.
>
> Thanks for helping me wring my thought out here.
>
> Steve
> =============
> On Sun, Jun 27, 2010 at 2:49 PM, Ben Goertzel <b...@goertzel.org> wrote:
>
>>
>> Hi Steve,
>>
>> A few comments...
>>
>> 1)
>> Nobody is trying to implement Hutter's AIXI design; it's a mathematical
>> design intended as a "proof of principle".
>>
>> 2)
>> Within Hutter's framework, one calculates the shortest program that
>> explains the data, where "shortest" is measured on Turing machine M.
>> Given a sufficient number of observations, the choice of M doesn't matter
>> and AIXI will eventually learn any computable reward pattern.  However,
>> choosing the right M can greatly accelerate learning.  In the case of a
>> physical AGI system, choosing M to incorporate the correct laws of physics
>> would obviously accelerate learning considerably.
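>>
>> In symbols (the textbook Solomonoff/Kolmogorov statement, given here just
>> for reference), "shortest program measured on M" means
>>
>>     K_M(x) = \min\{\ell(p) : M(p) = x\},
>>     \xi_M(x) = \sum_{p \,:\, M(p) = x*} 2^{-\ell(p)},
>>
>> where \ell(p) is the length in bits of program p; prediction weights every
>> consistent program by 2^{-\ell(p)}, so the shortest ones dominate, and the
>> invariance theorem K_M(x) \le K_{M'}(x) + c_{M,M'} is why the choice of
>> machine washes out asymptotically yet still matters a great deal for
>> finite data.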
>>
>> 3)
>> Many AGI designs try to incorporate prior understanding of the structure &
>> properties of the physical world, in various ways.  I have a whole chapter
>> on this in my forthcoming book on OpenCog....  E.g. OpenCog's design
>> includes a physics engine, which is used both directly and to aid with
>> inferential extrapolations...
>>
>> So I agree with most of your points, but I don't find them original except
>> in phrasing ;)
>>
>> ... ben
>>
>>
>> On Sun, Jun 27, 2010 at 2:30 PM, Steve Richfield <
>> steve.richfi...@gmail.com> wrote:
>>
>>> Ben, et al,
>>>
>>> I think I may finally grok the fundamental misdirection that current
>>> AGI thinking has taken!
>>>
>>> This is a bit subtle, and hence subject to misunderstanding. Therefore
>>> I will first attempt to explain what I see, WITHOUT so much trying to
>>> convince you (or anyone) that it is necessarily correct. Once I convey my
>>> vision, then let the chips fall where they may.
>>>
>>> On Sun, Jun 27, 2010 at 6:35 AM, Ben Goertzel <b...@goertzel.org> wrote:
>>>
>>>> Hutter's AIXI for instance works [very roughly speaking] by choosing the
>>>> most compact program that, based on historical data, would have yielded
>>>> maximum reward
>>>>
>>>
>>> ... and there it is! What did I see?
>>>
>>> Example applicable to the lengthy following discussion:
>>> 1 - 2
>>> 2 - 2
>>> 3 - 2
>>> 4 - 2
>>> 5 - ?
>>> What is "?"?
>>>
>>> Now, I'll tell you that the left column represents the distance along a
>>> 4.5 unit long table, and the right column represents the distance above the
>>> floor that you will be at as you walk the length of the table. Knowing this,
>>> without ANY supporting physical experience, I would guess "?" to be zero, or
>>> maybe a little more if I were to step off the table and land on
>>> something lower, like the shoes that I left there.
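>>>
>>> To make that contrast concrete, here is a minimal sketch (the constants
>>> and function names below are invented for illustration) of what "knowing
>>> this" buys you over raw pattern continuation:
>>>
>>>     # Pattern-only vs. world-model-constrained prediction for the table walk.
>>>     TABLE_LENGTH = 4.5   # units along the table
>>>     TABLE_HEIGHT = 2.0   # height of the table top above the floor
>>>
>>>     def predict_by_pattern(history):
>>>         """Naive induction: just continue the constant the data has shown."""
>>>         return history[-1][1]
>>>
>>>     def predict_with_world_model(x):
>>>         """Prediction constrained by a (trivial) model of the scene."""
>>>         return TABLE_HEIGHT if x <= TABLE_LENGTH else 0.0  # past the edge: floor
>>>
>>>     history = [(1, 2), (2, 2), (3, 2), (4, 2)]
>>>     print(predict_by_pattern(history))       # 2   (the "simplest program")
>>>     print(predict_with_world_model(5))       # 0.0 (you stepped off the table)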
>>>
>>> In an imaginary world where a GI boots up with a complete understanding
>>> of physics, etc., we wouldn't prefer the simplest "program" at all, but
>>> rather the simplest representation of the real world that is not
>>> physics/math *in*consistent with our observations. All observations
>>> would be presumed to be consistent with the response curves of our sensors,
>>> showing a world in which Newton's laws prevail, etc. Armed with these
>>> presumptions, our physics-complete AGI would look for the simplest set of
>>> *UN*observed phenomena that explained the observed phenomena. This
>>> theory of a physics-complete AGI seems undeniable, but of course, we are NOT
>>> born physics-complete - or are we?!
>>>
>>> This all comes down to the limits of representational math. At great risk
>>> of hand-waving on a keyboard, I'll try to explain by pseudo-translating the
>>> concepts into NN/AGI terms.
>>>
>>> We all know about layering and columns in neural systems, and understand
>>> Bayesian math. However, let's dig a little deeper into exactly what is being
>>> represented by the "outputs" (or "terms" for dyed-in-the-wool AGIers). All
>>> physical quantities are well known to have value, significance, and
>>> dimensionality. Neurons/Terms (N/T) could easily be protein-tagged as to the
>>> dimensionality that their functionality is capable of producing, so that
>>> only compatible N/Ts could connect to them. However, let's dig a little
>>> deeper into "dimensionality".
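>>>
>>> A toy rendering of that tagging idea (the tag names, classes, and
>>> compatibility rule below are assumptions made up to illustrate the point,
>>> not a claim about real neurons):
>>>
>>>     # Dimensionality tags gate which units are allowed to connect.
>>>     from dataclasses import dataclass, field
>>>
>>>     @dataclass
>>>     class Unit:
>>>         name: str
>>>         out_dim: str                                # what this unit produces
>>>         in_dims: set = field(default_factory=set)   # what it will accept
>>>         inputs: list = field(default_factory=list)
>>>
>>>     def connect(src: Unit, dst: Unit) -> bool:
>>>         """Allow a connection only if the source's output tag is acceptable."""
>>>         if src.out_dim in dst.in_dims:
>>>             dst.inputs.append(src)
>>>             return True
>>>         return False   # dimensionally nonsensical wiring is simply refused
>>>
>>>     edge  = Unit("edge_detector",   out_dim="spatial_feature")
>>>     speed = Unit("speed_estimator", out_dim="m/s", in_dims={"spatial_feature"})
>>>     plan  = Unit("plan_selector",   out_dim="idea", in_dims={"idea"})
>>>
>>>     print(connect(edge, speed))   # True  - compatible tags
>>>     print(connect(edge, plan))    # False - features cannot wire straight to ideas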
>>>
>>> Physicists think we live in an MKS (Meters, Kilograms, Seconds) world,
>>> and that all dimensionality can be reduced to MKS. For physics purposes they
>>> may be right (see challenge below), but maybe for information processing
>>> purposes, they are missing some important things.
>>>
>>> *Challenge to MKS:* Note that some physicists and most astronomers
>>> utilize "*dimensional analysis*" where they experimentally play with the
>>> dimensions of observations to inductively find manipulations that would
>>> yield the dimensions of unobservable quantities, e.g. the mass of a star,
>>> and then run the numbers through the same manipulation to see if the results
>>> at least have the right exponent. However, many/most such manipulations
>>> produce nonsense, so they simply use this technique to jump from
>>> observations to a list of prospective results with wildly different
>>> exponents, and discard the results with the ridiculous exponents to find the
>>> correct result. The frequent failures of this process indirectly
>>> demonstrate that there is more to dimensionality (and hence physics) than
>>> just MKS. Let's accept that, and presume that neurons must have already
>>> dealt with whatever is missing from current thought.
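>>>
>>> A concrete instance of the astronomers' trick (standard textbook material,
>>> included purely for illustration): to estimate a star's mass from a
>>> planet's orbital radius r and period T, the only combination of r, T, and
>>> the gravitational constant G with the dimensions of mass is
>>>
>>>     \left[\frac{r^{3}}{G\,T^{2}}\right]
>>>         = \frac{\mathrm{m}^{3}}{(\mathrm{m}^{3}\,\mathrm{kg}^{-1}\,\mathrm{s}^{-2})\,\mathrm{s}^{2}}
>>>         = \mathrm{kg},
>>>     \qquad M_{\star} \sim \frac{r^{3}}{G\,T^{2}},
>>>
>>> and Newtonian gravity then supplies the dimensionless factor,
>>> M = 4\pi^{2} r^{3} / (G T^{2}). Admit extra quantities (say, the speed of
>>> light) and the same procedure also spits out combinations with absurd
>>> exponents, which is exactly the pruning problem described above.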
>>>
>>> Consider, there is some (hopefully finite) set of reasonable
>>> manipulations that could be done to Bayesian measures, with the various
>>> competing theories of recognition representing part of that set. The
>>> reasonable mathematics to perform on spatial features is probably different
>>> than the reasonable mathematics to perform on recognized objects, or the
>>> recognition of impossible observations, the manipulation of ideas, etc.
>>> Hence, N/Ts could also be tagged for this deeper level of dimensionality, so
>>> that ideas don't get mixed up with spatial features, etc.
>>>
>>> Note that we may not have perfected this process, and further, that this
>>> process need not be perfected. Somewhere around the age of 12, many of our
>>> neurons DIE. Perhaps these were just the victims of insufficiently precise
>>> dimensional tagging?
>>>
>>> Once things can ONLY connect up in mathematically reasonable ways, what
>>> remains between a newborn and a physics-complete AGI? Obviously, the
>>> physics, which can be quite different on land than in the water. Hence, the
>>> physics must also be learned.
>>>
>>> My point here is that if we impose a fragile requirement for mathematical
>>> correctness against a developing system of physics and REJECT simplistic
>>> explanations (not observations) that would violate either the mathematics or
>>> the physics, then we don't end up with overly simplistic and useless
>>> "programs", but rather we find more complex explanations that are
>>> physically and mathematically believable.
>>>
>>> We should REJECT the concept of "pattern matching" UNLESS the discovered
>>> pattern is both physically and mathematically correct. In short, the next
>>> number in the "2, 2, 2, 2, ?" example sequence would *obviously* (by
>>> this methodology) not be "2".
>>>
>>> OK, the BIG question here is whether a carefully-designed (or evolved
>>> over 100 million years) system of representation can FORCE the construction
>>> of systems (like us) that work this way, so that our "programs" aren't
>>> "simple" at all, but rather are maximally correct?
>>>
>>> Anyway, I hope you grok the question above, and agree that the search for
>>> the simplest "program" (without every possible reasonable physics and math
>>> constraint that can be found) may be a considerable misdirection. Once you
>>> impose physics and math constraints, which could potentially be done with
>>> simplistic real-world mechanisms like protein tagging in neurons, the
>>> problem then shifts to finding ANY solution that fits the complex
>>> constraints, rather than finding the SIMPLEST solution without such
>>> constraints.
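>>>
>>> Sketched in code, the proposed shift looks something like this (the
>>> hypothesis records and constraint checks are placeholders invented for the
>>> sketch):
>>>
>>>     # Occam-style selection vs. constraint-first selection.
>>>     def dims_ok(h):     return h["dims_ok"]     # dimensional consistency
>>>     def physics_ok(h):  return h["physics_ok"]  # consistent with learned physics
>>>
>>>     CONSTRAINTS = [dims_ok, physics_ok]
>>>
>>>     def simplest(hypotheses):
>>>         """Shortest description wins; constraints are ignored."""
>>>         return min(hypotheses, key=lambda h: h["length"])
>>>
>>>     def any_consistent(hypotheses):
>>>         """Return any hypothesis that survives every hard constraint."""
>>>         for h in hypotheses:
>>>             if all(check(h) for check in CONSTRAINTS):
>>>                 return h
>>>         return None   # nothing survives; keep searching for explanations
>>>
>>>     hypotheses = [
>>>         {"name": "always 2",       "length": 3,  "dims_ok": True, "physics_ok": False},
>>>         {"name": "walk off table", "length": 17, "dims_ok": True, "physics_ok": True},
>>>     ]
>>>     print(simplest(hypotheses)["name"])        # "always 2"
>>>     print(any_consistent(hypotheses)["name"])  # "walk off table"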
>>>
>>> Once we can get past the questions, hopefully we can discuss prospective
>>> answers.
>>>
>>> Are we in agreement here?
>>>
>>> Any thoughts?
>>>
>>> Steve
>>>
>>>
>>
>>
>>
>> --
>> Ben Goertzel, PhD
>> CEO, Novamente LLC and Biomind LLC
>> CTO, Genescient Corp
>> Vice Chairman, Humanity+
>> Advisor, Singularity University and Singularity Institute
>> External Research Professor, Xiamen University, China
>> b...@goertzel.org
>>
>> “When nothing seems to help, I go look at a stonecutter hammering away at
>> his rock, perhaps a hundred times without as much as a crack showing in it.
>> Yet at the hundred and first blow it will split in two, and I know it was
>> not that blow that did it, but all that had gone before.”
>>
>>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
CTO, Genescient Corp
Vice Chairman, Humanity+
Advisor, Singularity University and Singularity Institute
External Research Professor, Xiamen University, China
b...@goertzel.org

“When nothing seems to help, I go look at a stonecutter hammering away at
his rock, perhaps a hundred times without as much as a crack showing in it.
Yet at the hundred and first blow it will split in two, and I know it was
not that blow that did it, but all that had gone before.”


