Three other ways of relating human abilities to language/notation design.
1. For a moment, put aside the fact that it's a human we're thinking about: how hard is it to extract needful information from a given structure? That is, how hard is it for _any_ device or being, assuming a limited set of abilities?
Most information designs require a lot of search to obtain the answers to certain sorts of questions. A good example (supported by empirical evidence) is the standard if-then construction, especially when deeply nested. That structure is good for answering the question "given these truth conditions, what will happen?" but bad for answering the question "This happened, what must the truth conditions have been?"
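To make the asymmetry concrete, here is a minimal sketch in Python with an invented rule set (the function name, conditions, and thresholds are all hypothetical, purely for illustration). The forward question is answered by following one path through the nesting; the backward question can only be answered by searching every path:

```python
from itertools import product

# A hypothetical rule set written as nested if-then. The forward
# question -- "given these conditions, what happens?" -- is answered
# by following a single path through the branches.
def shipping_cost(member: bool, total: float, express: bool) -> str:
    if member:
        if total >= 50:
            return "free shipping"
        elif express:
            return "express rate"
        else:
            return "standard rate"
    elif express:
        return "express rate"
    else:
        return "standard rate"

# The backward question -- "the answer was 'free shipping'; what must
# the conditions have been?" -- forces a search over every combination:
free = [combo for combo in product([True, False], [60.0, 10.0], [True, False])
        if shipping_cost(*combo) == "free shipping"]
print(free)  # only the member-with-total>=50 combinations survive
```

A decision table would invert the trade-off: the backward question becomes a column lookup, at the cost of making the sequential reading less direct.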
I believe that, in general, any information structure makes it easy to answer certain sorts of questions at the expense of making it harder to answer other sorts.
Corollary: there's no single information structure that's uniformly best for all purposes. You have to design for the purposes you believe will be important.
I've written various papers on this topic.
2. It is obvious that people don't create coherent structures by thinking solely at the level of individual lexemes, such as =; they have to think in some kind of higher-level structures. One hypothesis about those structures was the 'plan' structure, espoused by Eliot Soloway and Rob Rist. Where do those plans come from? A suggestion was that they initially come from everyday experience: we know how to count a pile of stones, and we apply the same mental process to programming.
The corollary would be that a language will be easy to learn if the plans it requires conform to everyday plans of the learners. Check out the stuff from Bill Curtis and, especially, John Pane.
3. Yet the language has to express those 'plans' in a way that is readable. In this area (and with due respect to Derek Jones, with whom I agree overall but whose utterances may sometimes be a little more cut and dried than I feel ready for) there are almost certainly uniformities in human abilities that transcend culture and experience. You can build on these.
Studies of human parsing suggest a few important considerations. For instance, if different syntactic constructions are signalled by different lexical markers, they seem to be easier to parse than if they are signalled only by different word order or by deeper aspects. Parsing natural language is a complex business and properties of individual words play a large part (English speakers use properties of verbs a lot) but the presence of effective lexical markers does seem to be important.
We can see how this applies to the design of programming languages by comparing Pascal syntax (lots of markers) to, say, Prolog (very few markers). Assuming we are parsing not for surface syntax, which is easy, but for plans as mentioned above, Pascal contains more surface-level lexical cues. Applying a psycholinguistic model of human parsing to Pascal and Prolog shows the effect very clearly (ref below). Unfortunately I never completed an experiment designed to test the same point empirically with professional programmers.
There are other universals or near-universals. For instance, syntactic dependencies do not interleave in natural language (possibly with rare exceptions). So if A..B is a syntactic pair, and P..Q is another, we usually observe compositions of A..B..P..Q or A..P..Q..B but not A..P..B..Q. (Dutch apparently has an exception; probably there are others.)
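The non-interleaving observation amounts to a bracket-matching condition, and a minimal sketch makes it precise (this is my illustration, not a claim about any real parser; the pair names A/B and P/Q are just the ones from the text). Treating each dependency as an opener/closer pair, a simple stack accepts sequential and nested compositions and rejects crossed ones:

```python
# Treat each syntactic dependency as a bracket pair and check
# well-nesting with a stack; crossed (interleaved) dependencies fail.
def well_nested(tokens, pairs):
    """pairs maps each opener to its closer, e.g. {'A': 'B', 'P': 'Q'}."""
    closers = {close: opener for opener, close in pairs.items()}
    stack = []
    for t in tokens:
        if t in pairs:
            stack.append(t)          # an opener starts a dependency
        elif t in closers:
            # a closer must match the most recently opened dependency
            if not stack or stack.pop() != closers[t]:
                return False         # crossing, as in A..P..B..Q
    return not stack                 # every dependency must be closed

# Sequential and nested compositions pass; the crossed one fails:
print(well_nested(list("ABPQ"), {"A": "B", "P": "Q"}))  # True
print(well_nested(list("APQB"), {"A": "B", "P": "Q"}))  # True
print(well_nested(list("APBQ"), {"A": "B", "P": "Q"}))  # False
```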
But there are traps here. The English embedded sentence structure is very hard: "The boy the girl the man saw liked smiled" is famously hard to understand but the embedded conditional "if A if B if C P else Q else R" is not. You have to think about the psycholinguistics carefully before generalising too far.
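For concreteness, here is that embedded conditional transcribed into Python (my transcription; A, B, C are arbitrary conditions and P, Q, R arbitrary outcomes). Each "else" acts as a lexical marker closing exactly one level, which may be part of why the construction stays tractable where the centre-embedded sentence does not:

```python
# "if A if B if C P else Q else R", written out directly.
def embedded(a: bool, b: bool, c: bool):
    if a:
        if b:
            if c:
                return "P"
            else:
                return "Q"
        else:
            return "R"
    return None  # the outermost 'if A' has no else in the original
```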
Green, T. R. G. and Borning, A. (1990) The Generalized Unification Parser: modelling the parsing of notations. In D. Diaper, D. Gilmore, G. Cockton and B. Shackel (Eds.) Human-Computer Interaction - INTERACT '90. Amsterdam: Elsevier. 951-957.
--- You'll notice that all the stuff I'm putting forward is at or near the surface level. As you go deeper towards semantics things get harder, and at that level I agree with Derek that we don't have much in sight that will be guaranteed to be universal across cultures. But I doubt whether that's your aim. If you're content to stay within one user population, John Pane's work seems to me to be a significant contribution.
You see I'm trying to compare the various types of programming languages against how closely they match the human thinking process. The argument I'm trying to make is that if a computer programming method is developed that is well aligned with the human process of thought then we shall get more understandable designs.
My "layman" suspicions are that the human mind uses several abstract representation methods simultaneously and "somehow" weaves in and out of, or integrates, these different representations.
We know for sure that several abstract representations are used for some low-level stuff - the famous example is the dual-coding work in psychology - and it seems very likely that you are right. I don't know where that takes you, do you have any thoughts about how to proceed? If so I'd be interested to hear.
Good luck with this question
Thomas Green
27 Allerton Park, Leeds LS7 4ND, UK
+44-(0)113-226-6687
http://www.ndirect.co.uk/~thomas.green
