John Pote @ Thu, Apr 24, 2008 at 10:44 PM:
> If we are ever to have bug-free code, whatever we write as geniuses must be
> understandable by everyone else who extends or bug-fixes the programme - not
> everyone is a genius; there are some coders and programmers who are below
> average (actually 50% of them, depending on your definition of average).
Compilers used to work on a Pentium-60 with 4 megs of RAM; now there are AMD64 machines with more than 4 gigs of RAM, and the same compilers are more bloated, bigger, slower and buggier - we should still be glad that they work at all. All they try to do is optimize the code. Gee, I'd say the hardware guys already did all that for you, but programmers and Vista-like toasting corporations (remember the `xblill` game?) spread their crappy software all over the place. I don't see really new desktop applications appearing these days. Even servers with more RAM handle the Internet more efficiently than they did 10 years ago (no new connection for every URL in the browser, for example). The extra frequency and CPU power just sits idle 80-90% of the time - expensive high-tech toasters. That is not to dismiss the real advances elsewhere: HDD speed, cheap RAM caches, etc.

I must also say that I've tried to apply text processing techniques to an ordinary codebase. Source code is text, yet many programmers (even above-average ones) use no tools beyond a text editor (just typing) and a compiler (run, rerun):

* automatic processing of a C code base:
  http://kerneltrap.org/node/15894
  (problems: an RE-friendly coding style, and developers' understanding of the tools)

* in particular, making purely textual annotations of code (no heuristics or AI) about what the code does, and then checking that the key words match key function calls or other functionality: static security audits, API usage checks, etc., and easier porting and patching. That is, text-based, function-level semantic patching (add function foo(), add parameter bar, remove buz, etc.), as opposed to line-oriented patching. `diff` + `patch` still have their place when the changes are too intrusive, but if the programmer provides an equivalent of the whole function, not just a `diff` against the current one, then a semantic patch can be applied; `diff` is still used to check automatically for changes in the upstream codebase.
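The whole-function semantic patch idea above can be sketched with ordinary text tools, assuming an RE-friendly coding style (GNU style here: the function name starts at column 0 and the body closes with a lone `}` at column 0). The file names and the function `foo()` are made up for illustration:

```shell
#!/bin/sh
# Sketch of a whole-function "semantic patch" done with plain text tools.
# It relies on an RE-friendly coding style: the function name starts at
# column 0 and the body ends with '}' at column 0.

cat > old.c << 'EOF'
int
foo(int x)
{
    return x + 1;
}

int
main(void)
{
    return foo(41);
}
EOF

# The patch is the whole replacement function, not a line-oriented diff.
cat > foo.new << 'EOF'
foo(int x)
{
    /* patched: double instead of increment */
    return x * 2;
}
EOF

awk '
    /^foo\(/ { while ((getline l < "foo.new") > 0) print l; skip = 1; next }
    skip && /^}/ { skip = 0; next }
    skip { next }
    { print }
' old.c > patched.c

grep -c 'x \* 2' patched.c    # prints 1: the new body is in place
```

Because the anchor is the function name rather than line numbers, this still applies after unrelated edits elsewhere in the file, which is exactly where line-oriented `diff` + `patch` gives up.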
http://kerneltrap.org/node/15996
(problems: programmers don't like, or don't understand, non-trivial text processing tools; they haven't invented more comprehensive text editors, or even coding techniques above the ground floor)

What is called an industry, and what pushes industrial revolutions? Tools! Name, please, new programming tools from the last 30 or 40 years. Thanks!

> Jackson's first two rules on optimisation are 1) Don't do it and 2) Don't do
> it yet. And I forget who first pointed out that the most productive form of
> optimisation is to change the underlying algorithm to a more efficient one
> rather than honing a less efficient algorithm. (But sometimes it just has to
> be done when we already have the best algorithm).

I have the same problem with regular expressions (there is no alternative to them in general-purpose text processing). When I asked why the "match longest" rule is hardwired into REs - why can't I have a simple '\{0,s\}' to match the shortest one? - I was told that there is a standard for (basic) REs and that the matcher in glibc is extremely complex. I am asking for a simple thing that adds more flexibility! I was also asking about changes to the way `sed` (not the RE matcher) can process text more easily and efficiently (you saw an example in my previous email):

http://kerneltrap.org/node/16028

Yet this extremely complex RE matcher doesn't do the simplest thing ever:

http://bugs.debian.org/475474

After this, I have experimental proof that too complex doesn't mean good. And optimization is a different thing from being simple. Creativity, imagination - where are they?

--
sed 'sed && sh + olecom = love' << ''
-o--=O`C
 #oo'L O
<___=E M
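P.S. The hardwired longest-match behaviour complained about above is easy to demonstrate, along with the usual negated-character-class workaround (the sample string is made up):

```shell
# POSIX REs always take the longest match: '.*' runs to the LAST </b>
echo 'one <b>two</b> three <b>four</b>' | sed 's/<b>.*<\/b>/X/'
# prints: one X

# The common workaround for "shortest match": forbid '<' inside the match
echo 'one <b>two</b> three <b>four</b>' | sed 's/<b>[^<]*<\/b>/X/'
# prints: one X three <b>four</b>
```

The workaround only works when a single forbidden character can delimit the match; a real shortest-match quantifier would have no such restriction.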
