ERIC P. CHARLES wrote:
Yet, I also have the feeling that if they for one moment thought as I did, that they were (at best) just playing a strange prediction game, the whole enterprise would suddenly grind to a halt. Ah, the time and money that would be saved.

If it were easy to make reliable predictions from limited data using nothing but machine-learning algorithms, across many kinds of problems, I'd agree with that. But finding the right signals and models can be hard. Simply removing a personal psychological stake in the ontological status of the semantics doesn't necessarily make the work any easier or harder. For example, one might not bother to think about why a model works, and so fail to gain further important insights. On the other hand, investing in understanding every detail of a model that doesn't work is also costly; a modeler should be prepared to abandon that line and avoid similar work.
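To illustrate why "limited data plus an algorithm" is rarely enough, here is a minimal sketch (my own toy example, not anything from the thread): a flexible model fits eight noisy samples perfectly yet predicts poorly between them, which is exactly the situation where understanding why a model works, or doesn't, starts to matter.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Limited data": eight noisy samples of an underlying signal.
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=8)

# A degree-7 polynomial has enough parameters to pass through
# all eight points exactly -- near-zero training error.
coefs = np.polyfit(x_train, y_train, deg=7)

# Evaluate against the true signal at points between the samples.
x_test = np.linspace(0.05, 0.95, 50)
y_test = np.sin(2 * np.pi * x_test)

train_err = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)

# The fit looks perfect on the data it saw, and much worse off it.
print(train_err, test_err)
```

The point of the sketch is just that good in-sample performance from limited data says little by itself; some insight into the model is needed to tell prediction from interpolation artifact.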

Marcus

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
