On Wednesday, 8 March 2017 at 14:50:18 UTC, XavierAP wrote:
> On Wednesday, 8 March 2017 at 14:02:40 UTC, Moritz Maxeiner wrote:
>> [...]
>>
>> This is true for controlled experiments like the one I pointed to, and this model works fine for the sciences where controlled experiments are applicable (e.g. physics). For (soft) sciences where human behaviour is a factor - and it is usually one you cannot reliably control - using quasi-experiments with a large sample size is a generally accepted way to accumulate empirical data.
>
> Right, but that's why "soft" sciences that use any "soft" version of the empirical method have no true claim to being actual sciences.

That is an opinion, though; the same goes for my initial position that enough empirical data on whether people using memory-safe languages (where your safe code can still call hidden unsafe code without your knowing it) actually end up creating memory-safe programs could provide a solid enough foundation to exclaim "I told you so", should the discrepancy turn out to be significant (what "significant" means in this context is, of course, another opinion).
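To make "hidden unsafe code" concrete, here is a minimal sketch in D, assuming its @safe/@trusted/@system attributes as one example of such a mechanism (the function names are made up for illustration):

void unsafeImpl(int* p) @system
{
    *p = 42;           // raw pointer write, not checked by the compiler
}

int hiddenUnsafe() @trusted
{
    int x;
    unsafeImpl(&x);    // manually vouched for; callers cannot see this
    return x;
}

int caller() @safe
{
    // Type-checks as @safe even though unsafe code runs underneath.
    return hiddenUnsafe();
}

Whether programmers still end up writing memory-safe programs despite such escape hatches is exactly the kind of question that empirical data would have to answer.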

> And it's why whenever you don't like an economist's opinion, you can easily find another with the opposite opinion and his own model.

I'm not an economist, so I can speak neither to the assumptions behind this nor to the conclusion.


> There are other sane approaches for "soft" sciences where (controlled) experiments aren't possible:
>
> https://en.wikipedia.org/wiki/Praxeology#Origin_and_etymology
>
> Of course these methods have limits on what can be inferred, whereas with models tuned to garbage historical statistics you can keep publishing in scientific journals forever and never reach any incontestable conclusion.

Thank you; I'll put praxeology on my list of things to read up on.
