Hi,
In my ongoing (lazy) attempt to understand Epigram, I'll try to
summarize what distinguishes the language (mostly its design goals,
rather than just what's been implemented so far).  I hope to write
tutorials/documentation sometime in the future that would be
understandable to people like me.  Comments welcome, especially if my
understanding is wrong or confused!  (I tried Epigram 1 a little, but
gave up after the interface frustrated me too much.)


General observations

- purely functional, of course

- The core language is total, and type-checking is decidable.  This is
achieved by restricting recursion to (non-Turing-complete) structural
recursion; a small Haskell sketch follows below.  Turing-completeness
can be abstracted the same way IO is (or unsafePerformed like IO is,
perhaps?).
  - As a consequence of totality, strict vs. lazy (non-strict)
evaluation in the main language is merely an implementation issue.  It
doesn't matter whether _|_ is lifted or unlifted if it can't exist in
the first place!
  - is this structurally-recursive language expressive enough to
achieve the same asymptotic complexity (in space as well as time) as
any general functional algorithm, assuming the function can be
expressed by structural recursion at all?  E.g., is it possible to
implement sort in O(n log n) in the worst case?
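
Here is a minimal Haskell sketch of what I mean by structural
recursion (my own example, not Epigram syntax): every recursive call
is on a strict subterm of the argument, so termination is evident
without general recursion.

    data Nat = Zero | Suc Nat

    -- addition by structural recursion on the first argument
    plus :: Nat -> Nat -> Nat
    plus Zero    m = m                 -- base case: no recursion
    plus (Suc n) m = Suc (plus n m)    -- recursive call on subterm n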

- anything that's normally implicit can be specified explicitly, e.g.
types as arguments.  For instance, id_Int could be the identity
function from Ints to Ints; likewise the vec_n x function from the
Epigram 1 tutorial.  (A rough GHC Haskell analogue follows.)
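
For comparison, GHC Haskell's TypeApplications extension lets you
supply a normally-inferred type argument explicitly (my example;
Epigram's notation differs):

    {-# LANGUAGE TypeApplications #-}

    -- id's implicit type argument, given explicitly as Int
    idInt :: Int -> Int
    idInt = id @Int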

- the compiler should be willing to compile code even when it can't
see why the code is okay.  I suppose this could be implemented by
undefined behavior a la C, or by runtime checks similar to
pattern-match failure in Haskell 98.  (A tiny sketch of the latter
follows.)
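
The runtime-check option is just what Haskell 98 already does for
incomplete pattern matches, e.g.:

    -- compiles despite the missing [] case; the check is deferred to
    -- runtime, where (firstElem []) raises a pattern-match failure
    firstElem :: [a] -> a
    firstElem (x:_) = x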

- the type system is sophisticated enough that it is very worthwhile
to have an editing environment that uses that information to help you
program.


Epigram's types

- Dependent types.  Languages like Haskell are gaining many
type-system extensions and proposals to make the type system more
expressive with fewer "type-system hacks"; Occam's Razor suggests that
full dependent types are a nicer approach (and probably lead to better
code re-use).  Dependent types also allow the Curry-Howard isomorphism
between programs and proofs to be represented directly (see the sketch
below).
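
A small Haskell sketch of the Curry-Howard reading (my example; this
equality type mirrors Data.Type.Equality in GHC's base library): a
value of type a :~: b is a proof that a and b are equal, and sym is
simultaneously a program and a proof that equality is symmetric.

    {-# LANGUAGE GADTs, TypeOperators #-}

    -- proofs as data: Refl is the only way to build an equality proof
    data a :~: b where
      Refl :: a :~: a

    -- a program whose type is the proposition "equality is symmetric"
    sym :: a :~: b -> b :~: a
    sym Refl = Refl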

- Observational Type Theory is the dependent type theory currently
being developed.  The reason there is concern over which type theory
to use (intensional, extensional...) is that it is far from trivial to
come up with a good way to decide equality between two type-level
expressions, in a system expressive enough that the programmer
sometimes needs to provide the evidence explicitly.  (A sketch of such
explicit evidence follows.)
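
For example (my Haskell sketch, reusing the :~: type from above, which
base provides in Data.Type.Equality): when the checker can't see on
its own that two type indices coincide, the caller can pass a proof,
and matching on Refl makes the equality available to the checker.

    {-# LANGUAGE GADTs, TypeOperators #-}

    import Data.Type.Equality ((:~:)(Refl))

    -- coerce between f n and f m, given explicit evidence that n ~ m
    castIndex :: n :~: m -> f n -> f m
    castIndex Refl x = x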

- Case analysis can provide information within each branch about the
type of the analyzed argument, and thereby information about other
values.  For example, in the function indexing a length-n vector with
an index whose type is parameterized by the index's maximum value, the
type checker notices that these are the same n.  So the impossible
cases (such as reaching the end of the vector with the index still
pointing further) don't need to be mentioned, which is a very good
thing, since there is probably no type-correct definition that could
go there anyway.  (A GHC Haskell approximation follows.)
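
An approximation using GHC's GADTs and DataKinds extensions (my
example; Epigram's own notation is different):

    {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

    data Nat = Zero | Suc Nat

    -- vectors indexed by their length
    data Vec (n :: Nat) a where
      Nil  :: Vec 'Zero a
      Cons :: a -> Vec n a -> Vec ('Suc n) a

    -- Fin n: natural numbers strictly below n (exactly n inhabitants)
    data Fin (n :: Nat) where
      FZ :: Fin ('Suc n)
      FS :: Fin n -> Fin ('Suc n)

    -- total with just two cases: since Fin 'Zero is uninhabited, the
    -- "index into an empty vector" case provably cannot arise, and
    -- the checker accepts the match as exhaustive
    vlookup :: Fin n -> Vec n a -> a
    vlookup FZ     (Cons x _)  = x
    vlookup (FS i) (Cons _ xs) = vlookup i xs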


Miscellaneous

- a basic advantage over the code-generation capabilities of proof
tools such as Coq (even though those could theoretically be good,
sophisticated compilers) is that algorithm efficiency matters, and so
algorithms should be clearly specified in the source language.  Also,
*heuristics* for finding a proof (program) don't mesh well with a
desire for reliable, decidable type checking?

- The logic system is constructive.  No Law of the Excluded Middle,
because how do you reasonably evaluate something based only on what
you know it isn't?  Or because it would be useless: a proposition may
be neither known to be true nor known to be false, and there is no
computational content in merely proving a negative.  I don't think I
know what I'm talking about here :)  (A Curry-Howard sketch follows.)
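
Under the Curry-Howard reading (my Haskell sketch): excluded middle
would be a total program of the type below, but there is no way to
produce either a proof of a or a refutation of a out of thin air.
Interestingly, its double negation *is* constructively provable.

    import Data.Void (Void)

    type Not a = a -> Void

    -- no total definition exists; any body must cheat
    lem :: Either a (Not a)
    lem = error "not constructively provable"

    -- the double negation of excluded middle, proved constructively
    nnLem :: Not (Not (Either a (Not a)))
    nnLem k = k (Right (\a -> k (Left a)))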

- Sometimes it looks like proper tail calls are lost, but values of
single-inhabitant types, as well as values that the compiler can prove
aren't needed at runtime (for whatever other reason) to compute the
result, can simply be eliminated.


even more miscellaneous

- 0, 1, 2...  Each natural number type contains all the preceding
numbers.  So there are no inhabitants of 0, only one inhabitant of 1,
two possibilities for 2, three for 3, etc.  This is similar to the
common set-theoretic definition of the natural numbers.  (These are
the Fin types from the lookup sketch above; a quick enumeration
follows.)
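
Enumerating a couple of these by hand (my example, repeating the Fin
definition from the lookup sketch so it stands alone):

    {-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

    data Nat = Zero | Suc Nat

    data Fin (n :: Nat) where
      FZ :: Fin ('Suc n)
      FS :: Fin n -> Fin ('Suc n)

    fin0 :: [Fin 'Zero]
    fin0 = []                          -- no inhabitants at all

    fin2 :: [Fin ('Suc ('Suc 'Zero))]
    fin2 = [FZ, FS FZ]                 -- exactly the two inhabitants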

- what work has been done on complexity analysis of (structurally
recursive) programs?  how feasible is it?  how much can be done
automatically, and/or how can programmers help prove complexity bounds
on their programs?  Although evaluation is theoretically guaranteed to
terminate, I don't want to wait 2^300 seconds for the type-checker to
compute some exponential-complexity function ^_^

- some mysterious notation in some papers' formalizations is at least
described a little at <http://en.wikipedia.org/wiki/Sequent_calculus>,
or at least at <http://en.wikipedia.org/wiki/Sequent>.  I don't know
if I understand how it's being used yet...  Also, one needs to know
the "order of operations" of the various symbols, although often only
one order makes sense anyway.  (A generic example follows.)
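
For what it's worth, the basic shape (a generic typing judgement of my
own, not taken from any particular Epigram paper), in LaTeX notation:

    % "in context Gamma, the expression e has type tau"
    \Gamma \vdash e : \tau

    % an inference rule: premises above the line, conclusion below
    \frac{\Gamma \vdash f : \sigma \to \tau
          \qquad \Gamma \vdash x : \sigma}
         {\Gamma \vdash f\,x : \tau}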


Hoping I can be helpful somehow to make up for being so verbose

Isaac
