[WARNING: RANT] Depending on which way you look at it, we either left the discipline of engineering a long time ago, or we never entered it.
Engineering is the practical application of knowledge from the pure sciences (e.g., physics or chemistry) to real-world problems. The challenge is that science deals with facts or truths systematically arranged and showing the operation of general laws. In the scientific process, we find what we think are the causes of successes and failures; we build hypotheses that model this behavior; we *methodically test the hypotheses*; and then we publish our findings for peer review.

Mechanical Engineering ... Electrical Engineering ... Civil Engineering ... these are all formal disciplines where one may obtain a license stating that one knows how to apply the principles of the domain of study. If you screw up and something fails, it comes back to you in full legal regalia. Software "engineering" has never been such, and while I do recall studying formulas, performing experiments, etc. back in school for things like database or graphics performance, there was no formal study of the way code is assembled. Anyone who could make a program that satisfied the criteria of the assignment got credit.

Even worse, security "engineering" (it makes me shiver to type those words together) is so much horsehockey that it makes software "engineering" look like Galileo's firstborn. So far, security "engineering" has largely been nothing more than an accumulation of tribal knowledge. "Oh, I fixed that problem this way. How'd you do it?" We heap piles of "best practices" together and pretend that those are going to protect us long enough and well enough to stay in business (at least until I come up with the next one). Where is the formal discipline? Where is the traceable execution of deductive reasoning? Why is your set of "best practices" any better than mine? Given the exact same problem statement and requirements, how likely are two security "engineers" to come up with the same answer?
LANGSEC is the *first* area of study I have seen that comes close to resembling the application of scientific principles to real-world problems in the software space. We can sit here and fuss and moan about how all these other engineering disciplines don't follow the right rules when writing code, but the bottom line is that THERE ARE NO RIGHT RULES FOR WRITING CODE. At least not yet, and not universally accepted.

So, Ali... how do we get ahead? Where is the economic case? These questions are like asking for the economic case for the Earth orbiting the Sun versus the heavens rotating around us. We keep seeking truth and talking about it in the process. Where this or that method or approach (i.e., hypothesis) holds up to rigorous and structured testing with repeatable and predictable results, we spread the gospel. This is a monumental problem. We don't solve it by flailing against the wall. We solve it by chipping away patiently and persistently. Truth is on our side. With any luck, our children's children may one day benefit from actual engineering principles applied to software. [/RANT]

On Tue, Apr 29, 2014 at 5:47 AM, Ali-Reza Anghaie <a...@packetknife.com> wrote:

> On Mon, Apr 28, 2014 at 11:02 AM, Derick Winkworth <ccie15...@gmail.com> wrote:
> > It's the antithesis of actual engineering. Data flow specs are long gone
> > because there is an assumption that reachability will exist. People install
> > products with no idea what traffic is actually flowing in and out of the
> > "box" (be that hardware or software).
>
> It's unfortunate that this behavior has been taking over all
> engineering for some decades now, though - I'm not sure it's IT
> specific. Certainly the early decisions of major aerospace systems
> like the F22/F119 and F35/F135 have played out this way too.
>
> > On Mon, Apr 28, 2014 at 8:24 AM, Meredith L. Patterson <clonea...@gmail.com> wrote:
> >>
> >>> Do you consider accepting only a safe subset of a language a proper
> >>> course of action?
> >>
> >> That was pretty much the exact point of Dejector, which defined a "safe
> >> subset" of SQL as "the subset of rules necessary to generate the queries the
> >> developer expects the application to generate under correct operation."
>
> So we've gone application protocol proxying --> shallow packet
> inspection --> DPI --> WAF --> etc. and collapse again.
>
> There seems to be an industry argument that to make "this" work it has
> to be done at the perimeter, and I think that's proven to be anything
> but true (also near impossible). And we already know what happens
> when we bring it all back closer to the data - it falls over again.
>
> So aren't you (or the "we") making the same argument we know won't
> ever actually happen?
>
> This is a long-winded way of saying I think we are still missing the
> economic motivators of this (financial and resource). If we can't plot
> the value of such engineered solutions against the marketing, lawfare,
> PR, etc. that all maintain the market, then what's the point for the
> consumer (in this case developers, operations, and stability)?
>
> This isn't exclusive to IT - which is what worries me. Clearly
> "everyone else" sees the economics of it differently than we do - and
> they're (so far) proven right. How do we get ahead of that since we
> ~know~ they're eventually catastrophically wrong?
>
> -Ali
> _______________________________________________
> langsec-discuss mailing list
> langsec-discuss@mail.langsec.org
> https://mail.langsec.org/cgi-bin/mailman/listinfo/langsec-discuss
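P.S. For anyone new to the list, Meredith's "safe subset" point in the quote above is the core LANGSEC move: instead of scanning input for known-bad patterns, you define the grammar of inputs the application legitimately produces and reject everything that does not parse. Here is a minimal sketch in Python of that recognize-then-process idea. The grammar is a toy I invented for illustration; Dejector's actual grammar and implementation are different.

```python
import re

# Tokens are identifiers or the four punctuation marks the subset allows.
# Note: there are deliberately no literals, quotes, semicolons, or comments
# in the alphabet, so classic injection payloads fail at the lexer.
TOKEN = re.compile(r"\s*([A-Za-z_][A-Za-z0-9_]*|=|,|\?|\*)")
KEYWORDS = {"SELECT", "FROM", "WHERE", "AND"}
PUNCT = {"=", ",", "?", "*"}

def tokenize(query):
    """Return a token list, or None if any character falls outside the
    subset's alphabet."""
    tokens, pos = [], 0
    query = query.strip()
    while pos < len(query):
        m = TOKEN.match(query, pos)
        if not m:
            return None
        tokens.append(m.group(1).upper())
        pos = m.end()
    return tokens

def accepts(query):
    """Recognizer for the toy grammar:
         query := SELECT cols FROM ident [WHERE cond (AND cond)*]
         cols  := '*' | ident (',' ident)*
         cond  := ident '=' '?'    # values only via bind placeholders
    """
    toks = tokenize(query)
    if toks is None:
        return False
    i = 0

    def peek():
        return toks[i] if i < len(toks) else None

    def eat(tok):
        nonlocal i
        if peek() == tok:
            i += 1
            return True
        return False

    def ident():
        nonlocal i
        t = peek()
        if t is not None and t not in KEYWORDS and t not in PUNCT:
            i += 1
            return True
        return False

    if not eat("SELECT"):
        return False
    if not eat("*"):                  # cols := '*' | ident (',' ident)*
        if not ident():
            return False
        while eat(","):
            if not ident():
                return False
    if not (eat("FROM") and ident()):
        return False
    if eat("WHERE"):                  # optional WHERE clause
        if not (ident() and eat("=") and eat("?")):
            return False
        while eat("AND"):
            if not (ident() and eat("=") and eat("?")):
                return False
    return i == len(toks)             # no trailing tokens allowed
```

The design choice worth noting: values can only appear as `?` bind placeholders, so attacker-controlled data never becomes part of the query language at all. That is the difference between a whitelist grammar and a blacklist filter, and it is exactly the distinction the thread is circling.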