James, I have to say, this is very interesting, and unless I'm very
much mistaken, I'm not alone in flipping through my entry-level
chemistry texts looking for bibliographic references to chemical
engineering works to beg/borrow/steal.

But before I run out and start reading, I want to ask your opinion, in
the context of building messy but accurate models.

At a2i2 we are heavily results-oriented, and in many cases we will
leave areas of our theories blank rather than fill them with bad
guesses. We tend to build our theory by adding partial or limited
solutions for the perceived problem space onto our system, observing
the results, and using the perceived interactions as a metric for
determining our theory of that space, our estimate of its importance,
and the content of our solution.

In the framework you have extracted from your experience in Chemical
Engineering and messy problems in Computer Science, is there a process
for taking messy or incomplete theories and updating them with partial
or approximate experimental data? The biggest problem we have with
this is that the interactions between theoretical parts and
experimental parts aren't really mappable. So as we get results in on
a particular subsystem, it's hard to know how/if they can be combined
with the results from other subsystems, or with our theoretical
grounding, without physically doing the integration to the greatest
extent possible.
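
To make the question concrete, here is a minimal sketch of the kind of
partial update I have in mind: a precision-weighted merge of a
theoretical estimate with one noisy subsystem measurement. The numbers
and names are hypothetical, and I'm not assuming this is how your
framework works:

    # Toy sketch: merge a theoretical estimate of one subsystem
    # parameter with an approximate measurement of that subsystem.
    # Hypothetical values; illustration only.

    def precision_weighted_update(prior_mean, prior_var, meas_mean, meas_var):
        """Combine a theoretical estimate with a noisy partial measurement."""
        w_prior = 1.0 / prior_var
        w_meas = 1.0 / meas_var
        mean = (w_prior * prior_mean + w_meas * meas_mean) / (w_prior + w_meas)
        var = 1.0 / (w_prior + w_meas)
        return mean, var

    # Theory says a subsystem gain is about 2.0, but we're unsure
    # (variance 0.5); a rough experiment on that subsystem alone
    # reports 2.6 with variance 0.3.
    print(precision_weighted_update(2.0, 0.5, 2.6, 0.3))

The hard part, of course, is the step this sketch skips entirely:
knowing how the updated subsystem value should feed back into the rest
of the model, which is exactly the mapping problem above.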

It seems to me (having only first-year-textbook knowledge of chemical
engineering) that there may in fact be a good deal of experience with
this kind of problem, since some reactions would be well documented,
and others calculated, theoretical, or approximate, within the same
projected process.

> As a chemical engineering student, you are taught to take absolutely
> every fact, equation, and expected result you can think of or discover
> before starting with the reduction to quasi-determinism.  As a
> heuristic, the more information and weird edge cases you put into the
> system, the better the predictive quality of the final product.  

This doesn't strike me as exactly the same thing Minsky is proposing.
He's not just proposing that we take all information into account;
he's actually saying that we should attempt to integrate lots of
/theories/ of the mind out of fear of leaving something out. I
understand that fear, but it seems like the wrong way to address it.
Models are not facts to be included.


> A lot of people get the impression that this leads to information
> overload, but there are established algorithms and techniques to
> systematically reduce all this complexity and one finds with experience
> that by throwing *more* stuff into the pile, it is actually easier
> to build a good model and makes resolving inconsistencies and
> uncertainties to reasonable or adequate values easier.  More
> information tends to produce superior models via induction.

Certainly, though, some increase in deviation must be intrinsic to
increasing model complexity, as the interactions begin to have
multiplicative or propagating effects. Does chemical engineering have
a clean way of chaining uncertainties in proportion to known effects
or chemical domains?
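
For what it's worth, the picture in my head is something like
first-order error propagation, where the uncertainty of a derived
quantity is assembled from the uncertainties of its inputs. A toy
sketch, assuming independent inputs and a simple product relationship
(nothing here is specific to your framework):

    import math

    # First-order (linearized) uncertainty propagation for f(x, y) = x * y.
    # Sketch only: assumes independent inputs and small relative errors.
    def propagate_product(x, sx, y, sy):
        f = x * y
        # For a product of independent quantities, relative variances add.
        sf = abs(f) * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
        return f, sf

    # e.g. a flow of 3.0 +/- 0.1 times a concentration of 0.8 +/- 0.05
    print(propagate_product(3.0, 0.1, 0.8, 0.05))

What I don't see is how to keep this honest once the subsystems start
interacting, which is where the multiplicative and propagating effects
I mentioned come in.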

>  From this perspective, one cannot build an accurate "simple" model
> unless one starts with a very complex model that includes everything
> remotely related to the system in question no matter how irrelevant
> seeming.  As often happens in chemical engineering systems, the
> exclusion of equations and facts to make a simpler initial model very
> frequently reduces the quality and accuracy of the simplified model.
> Furthermore, there is no theoretical justification for exclusion
> because that is predicated on having a good model of the system in the
> first place. 

How do you reconcile this with the apparent success of top-down
simplification in things like architectural design, or even strong OO
program structure? These seem to me to be points against the
superiority of the messy model in all complex domains. Or are there
costs and workarounds I'm not aware of in these areas, relative to the
messy model?

> Yet virtually every approach to AGI design (and much
> software design in  general) is based on starting with a simple model
> and building upward to a "correct" model, or starting with complexity
> and not even bothering to properly reduce it, very questionable methods
> for solving non-toy system models that contain uncertainties or are
> incomplete.  A problem with this is that you usually end up at a
> different "correct" solution working from the simple model upward than
> if you start with a complex model and work downward, and the downward
> model will almost always be more accurate if developed by someone
> accustomed to working that way.  Unfortunately, system reduction and
> modeling algorithms and heuristics of the kind used in chemical
> engineering do not seem to be taught in computer science even though
> they have always been eminently relevant as far as I could tell.

This is very interesting, and I'm ashamed to say that I have to read a
lot more before I can comment on this discrepancy. Do you see any
other fields that take similar approaches to this kind of modelling
accuracy? The only other examples I can think of are things like
uncertain game theory (risk under uncertainty) and verifier theory
(the science of science, particularly knowledge domains), neither of
which seems to have an associated clean (or at least well-defined)
mathematics for the integration.

I don't know that I could actually use the math, but it would be nice,
as it tends to shake out any serious inconsistencies and show the
ranges and bounds of the stated relationships.

thanks,

-- 
Justin Corwin
[EMAIL PROTECTED]
http://outlawpoet.blogspot.com
http://www.adaptiveai.com
