[ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]

It seems to me that "types" can be many things. From a static type system 
perspective, types are something very specific. This is how we reach the 
description of JavaScript as "untyped" or "unityped" even though, from another 
perspective, it absolutely does have types, and programmers use these types to 
structure and reason about their programs.

The static type system of ML does some reasonable/decidable amount of checking, 
after which it hands everything else to a dynamic checker. So is it not 
appropriate to use the word "types" when reasoning about expression 
classifications that cannot be captured by static types? Programmers constantly 
categorize computations in terms of "fast", "slow", "runs in less than a 
second", "computes on the fly", "serves precomputed data", "in a dirty state", 
"sensitive to logged-in state", "touches the DOM", let alone more basic things 
like "lowercase string", "positive number", "very large number", and so on. Web 
programmers will routinely split an architecture into code that "might touch 
the DOM" and "DOM-independent code".

It seems to me that if we talk about types as "enabling", these 
as-yet-unverifiable types are the most enabling — the most useful in building 
large architectures and the most prevalent in programming culture and in 
discussion around programs. I would bet that the most relevant discussion about 
"types of data" and "types of code" in the Linux kernel, say, is not about 
whether the right kind of arguments are being passed into functions, but rather 
about much more complex and important classifications of code that have to 
do with performance, security, code design and clarity, and so on, and are 
necessarily enforced dynamically or by practicing a disciplined style. (And 
these would have to be enforced in the same way in ML as well.)
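
Likewise, a computation classification such as "runs in less than a second" 
can today only be checked while the program runs. A small illustrative OCaml 
sketch (the helper name and the one-second threshold are made up):

    (* Dynamically enforce a performance classification. Sys.time
       measures processor time; a wall-clock version would use a
       library such as Unix instead. *)
    let assert_fast (f : 'a -> 'b) (x : 'a) : 'b =
      let t0 = Sys.time () in
      let y = f x in
      if Sys.time () -. t0 > 1.0 then
        failwith "classified as \"runs in under a second\", but was not";
      y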

I think of these as "types" even though they are still largely out of reach 
for programming language researchers.

Marius


On May 13, 2014, at 8:25 AM, Greg Morrisett <[email protected]> wrote:
> [ The Types Forum, http://lists.seas.upenn.edu/mailman/listinfo/types-list ]
> 
> One thing this whole conversation ignores is the role of types
> in compilation.  Even early FORTRAN compilers needed to make
> distinctions (integral versus floating-point) in order to 
> (a) determine how much space to allocate, (b) [possibly] align
> values in memory, and (c) dispatch to appropriate instructions
> for doing arithmetic (i.e., "+" was overloaded).  These
> distinctions are still relevant and important today.
> 
> Indeed, types have played a central role in efficient
> compilation of languages, and in that sense, are extremely
> enabling.  A uni-typed language, in the parlance of Harper,
> requires a universal representation, which usually means
> a level of indirection, perhaps additional tags so that
> we can discriminate, etc.  
> 
> Today, a good compiler for a uni-typed language first reconstructs
> the underlying "Scott-types" (or some refinement thereof) in
> the hope of avoiding the costs of universal representations.
> (See e.g., the JVML, LLVM, GHC's IR, etc.)  So the practice
> and theory are quite aligned here:  types form the foundations,
> and the illusion of a dynamically-typed setting is there for
> the convenience of the user.  
> 
> Even in the uni-typed languages, not all operations are allowed
> on all values (e.g., try taking the car/head of a lambda),
> even though these operations are certainly defined at the
> bit-level.  So there is some imposition of abstraction before
> the fact, enabling reasoning at the level of the language's
> abstract machine, instead of the concrete bit-level.  
> 
> I suppose if you turn your head around backwards enough, you
> see this as an "imposition" on your God-given right to 
> do what you want with the bits, but others see this as a
> constructive way to define a language.  
> 
> -Greg
> 
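
To make the universal-representation point above concrete: in a uni-typed 
setting every value sits behind a tag, and operations check the tag at 
runtime. An illustrative OCaml sketch of the general idea (not any 
particular runtime):

    (* A universal representation: every value is tagged, and a pair
       adds a level of indirection. *)
    type univ =
      | Int of int
      | Pair of univ * univ
      | Closure of (univ -> univ)

    (* "car" is defined only on pairs: taking the car of a lambda is
       perfectly meaningful at the bit level, but the tag check
       rejects it at the level of the abstract machine. *)
    let car (v : univ) : univ =
      match v with
      | Pair (hd, _) -> hd
      | _ -> failwith "car: expected a pair"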
