Robert Howard wrote:
>
> The more “good” constraints (or boundaries) we adopt and practice in 
> our lives, which require wisdom and self discipline, the easier it 
> becomes to formally analyze our inventions and communicate them to 
> others for peer review.
>
Unless they are "bad" constraints that only limit one's imagination about 
the ways the world actually might work. Of course, even 
machine language (of Von Neumann CPU type architectures) is a 
constraint. The question is not whether to accept some constraints 
(there is no practical choice in that), it's whether the constraints 
one chooses turn out to be well matched to the thing being studied. 
It's not like there is always this clean line between the known and 
unknown. I think one must have a repertoire of approaches to try.

It would be nice if there was a clean line between the known and 
unknown. Then it would be an argument about software engineering.
The situation is more like this: a modeler thinks that something acts 
within some range of parameters or plausible mechanisms, and the detailed 
parameters and behaviors are arguable or need to be found. Nailing them 
down into some rigid type system won't necessarily help do that. It 
could even obscure insight.
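The point about sweeping ranges of parameters before committing to a rigid structure can be sketched in a toy example. Everything here is hypothetical, invented for illustration: the model, the parameter names ("rate", "capacity"), and the candidate ranges are all assumptions, not anything from the discussion above.

```python
# Hypothetical sketch: a model whose parameters are still being explored.
# Keeping them in a loose mapping lets us sweep plausible ranges before
# nailing anything down into a rigid type.

def growth_model(params):
    """Toy logistic-style iteration; parameter names are assumed."""
    x = params.get("x0", 1.0)
    for _ in range(params.get("steps", 10)):
        x += params["rate"] * x * (1 - x / params["capacity"])
    return x

# Sweep a plausible range of 'rate' values to see which behavior emerges.
candidates = [{"rate": r, "capacity": 100.0} for r in (0.1, 0.5, 1.0)]
results = [growth_model(p) for p in candidates]
```

The loose dict is deliberately the opposite of a rigid typed record: while the mechanism is still arguable, the flexible form makes it cheap to add, drop, or rename parameters as understanding changes.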
> With ML, we can jump around all over the place, ignoring stack, types, 
> and other rules. It’s spaghetti city!
Do you expect a terrorist or virus to follow these nice rules and proper 
software engineering practice?
> The code becomes far more readable and easier for a reader to get the 
> authors idea.
Yes, if the author (or machine learning system) even has a model that 
works at all.

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org