Thus spake Marcus G. Daniels circa 09-09-16 10:39 AM:
> If the symbols of a model aren't anywhere close to grounded, almost
> any proposition could be true or false. It could be that some things
> are more or less likely, but figuring that out soon becomes a huge
> computational/cognitive load.
Well, the symbols in such a model _are_ grounded to the person constructing and using the model. So, as a thinking tool, there's no danger at all. The danger comes in when that person makes the mistake of believing that what they think is somehow real.

Besides, we can say the exact same thing about models grounded in the vernacular. Just because a bunch of people use the same terms in, seemingly, the same way does NOT imply that those terms are any more grounded than the private terms inside one person's mind. In fact, because those terms are aggregate abstractions, they are _less_ well grounded than personal terms (because grounding comes from having fingers, toes, tongues, eyes, etc.).

The danger of misunderstanding and confusion is much higher when using the vernacular because it's more tempting to think that, because you speak the way others do, you're all somehow _right_ about whatever you're talking about. It's easier to be tricked into thinking a falsehood is true if _lots_ of people share in the falsehood ... another typical trait of organized religion. Using your own private models and forcing yourself to continually map your lexicon to others' is a great way of ensuring you don't fall into the trap of "consensus reality" and justificationism.

Really, it's six of one, half a dozen of the other. Both are untrustworthy, and that's why the success of science is based on _behavior_, not words.

--
glen e. p. ropella, 971-222-9095, http://agent-based-modeling.com
