By the way, this paragraph from Graham's essay (and in fact his constant reiteration of it throughout his work) is perhaps the most underrated idea we have in the programming industry. Actually, it's not just the programming industry... My emphasis added:
You can magnify the effect of a powerful language by using a style called bottom-up programming, where you write programs in multiple layers, the lower ones acting as programming languages for those above. If you do this right, you only have to keep the topmost layer in your head.

This isn't just a style, it's programming to a micro-interface, and programming in extremely tiny chunks... it's also what FONC seems to be doing with the idea of POLs and OMeta translating between them. The interface *is* the micro-language... what's inside the interface is simply the implementation of the micro-language (or POL, if you like).

One technique I use that is particularly helpful is giving my "variables" really long, descriptive names. Effectively I use variable names as comments. But this is only because I program in languages that don't support a visual combining of infinitely recursive sub-languages. The Lisps apparently support this, according to Graham, but in the end when I program in Lisp I still end up writing files of text, often using arcane symbols, and it feels like I'm a fyre-wielding mage from some yester-epoch. That feels like an epic fail to me.

Julian

On 09/05/2012, at 2:20 AM, Jarek Rzeszótko wrote:

> Natural languages are commonly much more ambiguous and, you could say, "fuzzy"
> (as in fuzzy logic) than (currently popular) programming languages, and hence
> switching between the two has to cause some difficulties.
>
> Example: I have been programming in Ruby for 7 years now, for 5 years
> professionally, and yet when I face a really difficult problem the best way
> still turns out to be to write out a basic outline of the overall algorithm
> in pseudo-code. It might be a personal thing, but for me there are just too
> many irrelevant details to keep in mind when trying to solve a complex
> problem using a programming language right from the start. I cannot think of
> classes, method names, arguments etc.
> until I get a basic idea of how the
> given computation should work on a very high level (and with the
> low-level details staying "fuzzy"). I know there are people who feel the same
> way; there was an interesting essay from Paul Graham, followed by a very
> interesting comment on MetaFilter about this:
>
> http://www.paulgraham.com/head.html
> http://www.metafilter.com/64094/its-only-when-you-have-your-code-in-your-head-that-you-really-understand-the-problem#1810690
>
> There is also the Pseudocode Programming Process from Steve McConnell and
> his "Code Complete":
>
> http://www.coderookie.com/2006/tutorial/the-pseudocode-programming-process/
>
> Another thing is that code tends to evolve quite rapidly as the
> constraints of a given problem are explored. Plenty of things in almost any
> program end up being the way they are because of constraints that
> frequently were not obvious at the start and might not be obvious from just
> reading the code; that's why people often rush to do a complete rewrite of a
> program, only to run into the same problems they had with the original one.
> The question is how much more time documenting those constraints in
> the code would take, and how much time it would save in future maintenance of
> the code. I guess the amount of context that would be beneficial varies a lot
> between applications.
>
> Since you mention TeX, I think literate programming is pretty relevant to this
> discussion too, and I am personally looking forward to trying it out one day.
> Knuth himself said he would not have been able to write TeX without literate
> programming, and the technique is of course partially related to what I've
> said above regarding pseudocode:
>
> http://www.literateprogramming.com/
>
> Cheers,
> Jarosław Rzeszótko
>
>
> 2012/5/8 David Goehrig <[email protected]>
>
> On May 8, 2012, at 2:56 AM, Julian Leviston <[email protected]> wrote:
>
> > Humans parsing documents without proper definitions are like coders trying
> > to read programming languages that have no comments
>
> One of the underappreciated aspects of a system like TeX, with its ability to
> do embedded programming, or a system like Self with its annotations as part
> of the object, or even Python's .__doc__ attributes, is that they provide
> context for the programmer.
>
> A large part of the reason these are underappreciated is that most
> programs aren't sufficiently well factored to take advantage of these
> capabilities. As a human description of what the code does and why will
> invariably take about a paragraph of human text per line of code, a 20-line
> function requires a pamphlet of documentation to provide sufficient context.
>
> Higher-order functions, objects, actors, complex messaging topologies,
> exception handling (and all manner of related non-local exits), and the like
> only compound the context problem, as they are "non-obvious". Most of the FP
> movement is a reaction against "non-obvious" programming. Ideally this would
> result in a positive "self-evident" model, but in the real world we end up
> with Haskell monads (non-obvious functional programming).
>
> In the end the practical art is to express your code in such a way that the
> interpretation of the written word and the effective semantics of the program
> are congruent. Or in human terms: "you say what you mean, and the program
> does what it says."
> I have a code sample I use in programming interviews which reads,
> effectively:
>
>     function (name) {
>         var method = this[name];
>         return method.apply(this, arguments.after(0));
>     }
>
> After showing the definition of after, I typically ask the simple question:
> if I call this function as
>
>     fn('add', 1, 2)
>
> what is the value of arguments.after(0)?
>
> In about 2 out of 25 interviews for senior-level devs I get the right answer.
> Almost all non-programmers I've talked to, given the little bit of
> context "programmers often start counting from 0", get the right answer,
> without having read the formal definition in the base language two lines
> earlier. What I've learned from asking this question in interviews is that
> the context one carries often colors one's interpretation of the
> question. Usually 5 out of 25 will be confused because their favorite
> framework defines "after" to mean something else entirely, and they can't
> grok the new contextual definition.
>
> The interesting bit is the other 18 people, who either fail to answer the
> question entirely, don't know how functions pass arguments, or come up with
> bizarrely wrong answers. Usually these 18 fail because they cannot interpret
> what the program does in the specific context of a function call. They don't
> have a model of the state machine in their head. No amount of formal
> definition will let them process that information. These programmers get by
> through cribbing and trial and error. As one described his methodology: "I
> feed it inputs until I get what looks like the right answer."
>
> For these people, precise definitions, formal language, clever idioms, and
> finely tuned mathematical constructs do not matter, because they flip burgers
> with more care. And therein lies the crux of the issue: we may be smart
> enough to understand these machines, but the majority of people working in
> industry are not.
> And the programmers who become managers at large firms
> choose obtuse, inexpressive, cumbersome languages like Java, because they're
> hiring those 23 I'm turning down.
>
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
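[Since the thread never shows the definition of after that candidates were given, here is one plausible reading, fleshed out just enough to run: after(n) returns the arguments past index n. The helper and the calculator object below are invented for illustration, not taken from the original interview.]

```javascript
// Hypothetical helper: everything in the array past index n.
// (The interview snippet calls this on `arguments`; since `arguments`
// is only array-like, we convert it to a real array first.)
Array.prototype.after = function (n) {
  return this.slice(n + 1);
};

var calculator = {
  add: function (a, b) { return a + b; },
  // The interview snippet, made runnable:
  fn: function (name) {
    var method = this[name];
    var args = Array.prototype.slice.call(arguments);
    return method.apply(this, args.after(0));
  }
};

// after(0) drops 'add', so the method receives [1, 2].
console.log(calculator.fn('add', 1, 2)); // 3
```

Under this reading, arguments.after(0) evaluates to [1, 2]: everything after the 0th argument, which is what the "programmers often start counting from 0" hint points non-programmers toward.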
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
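[A small sketch of the bottom-up style Graham describes at the top of this thread. The pricing domain and every name here are invented for illustration: the lower layer defines a tiny vocabulary, and the top layer is written entirely in that vocabulary, so it is the only layer you have to keep in your head.]

```javascript
// Lower layer: a tiny "language" for pricing rules.
function percentOff(rate) {
  return function (price) { return price * (1 - rate / 100); };
}

function atLeast(minimum, rule) {
  // Apply `rule` only when the price clears a threshold.
  return function (price) {
    return price >= minimum ? rule(price) : price;
  };
}

function allOf() {
  // Chain the given rules left to right into one rule.
  var rules = Array.prototype.slice.call(arguments);
  return function (price) {
    return rules.reduce(function (p, rule) { return rule(p); }, price);
  };
}

// Top layer: reads almost like the pricing policy itself.
var holidayPricing = allOf(
  atLeast(100, percentOff(25)),  // 25% off orders of 100 or more
  atLeast(500, percentOff(50))   // a further 50% off what's still 500+
);

console.log(holidayPricing(200)); // 150
console.log(holidayPricing(600)); // 450
```

The discounts are beside the point; what matters is that holidayPricing is expressed purely in the lower layer's vocabulary, which is what Graham means by the lower layers "acting as programming languages for those above".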
