On Sat, Apr 6, 2013 at 7:10 PM, Julian Leviston <jul...@leviston.net> wrote:

> LISP is "perfectly" precise. It's completely unambiguous. Of course, this
> makes it incredibly difficult to use or understand sometimes.
>

LISP isn't completely unambiguous. First, anything that is *implementation
dependent* is ambiguous or non-deterministic from the pure language
perspective. This includes a lot of things, such as evaluation order for
modules. Second, anything that is *context dependent* (e.g. deep
structure-walking macros, advanced uses of special vars, the Common Lisp
Object System, OS- and configuration-dependent structure) is not entirely
specified or unambiguous when studied in isolation. Third, any operation
that is non-deterministic (e.g. due to threading or heuristic search) is
naturally ambiguous; to the extent you model and use such operations, your
LISP code will be ambiguous.
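To make the first point concrete, here is a sketch (in Python rather than LISP, purely for illustration; the function names are hypothetical, and the "right-to-left implementation" is simulated by hand) of why unspecified evaluation order makes effectful code ambiguous: two conforming implementations can produce different observable effects from the same expression.

```python
# Two side-effecting "argument" computations; a log records effects.
log = []

def arg_a():
    log.append("a")
    return 1

def arg_b():
    log.append("b")
    return 2

# An implementation that evaluates arguments left-to-right:
log.clear()
result_lr = arg_a() + arg_b()
order_lr = list(log)   # effects observed: ["a", "b"]

# Simulating an implementation that evaluates right-to-left:
log.clear()
b = arg_b()
a = arg_a()
result_rl = a + b
order_rl = list(log)   # effects observed: ["b", "a"]

# The *value* happens to agree, but the observable effects differ,
# so under an unspecified order the program's behavior is ambiguous.
print(result_lr, order_lr)  # 3 ['a', 'b']
print(result_rl, order_rl)  # 3 ['b', 'a']
```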

Ambiguity isn't necessarily a bad thing, mind. One can consider it an
opportunity: For live coding or conversational programming, ambiguity
enables a rich style of iterative refinement, where the
compiler/interpreter fills the gaps with something that seems reasonable,
then the programmer edits if the results aren't quite those desired. For
mobile or portable code, ambiguity can provide
some flexibility for a program to adapt to its environment. One can
consider it a form of contextual abstraction. Ambiguity could even make a
decent target for machine-learning, e.g. to find optimal results or improve
system stability [1].

[1] http://awelonblue.wordpress.com/2012/03/14/stability-without-state/


>
> is it possible to build a series of tiny LISPs on top of each other such
> that we could arrive at incredibly precise and yet also incredibly concise,
> but [[easily able to be traversed] meanings].


It depends on what you want to say. Look up Kolmogorov Complexity [2].
There is a limit to how concise you can make a given meaning in a given
language, no matter how you structure things.

[2] http://en.wikipedia.org/wiki/Kolmogorov_complexity

If you want to say a broad variety of simple things, you can find a
language that allows you to say them concisely.

We can, of course, create a language for any meaning Foo that allows us to
represent Foo concisely, even if Foo is complicated. But the representation
of this language becomes the bulk of the program. Indeed, I often think of
modular programming as language-design: abstraction is modifying the
language from within - adding new words, structures, interpreters,
frameworks - ultimately allowing me to express my meaning in just one line.
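A toy sketch of that idea of "abstraction as language design" (in Python; all the names are hypothetical illustration, not anyone's actual API): we grow a small vocabulary of words until the intended meaning fits on one line, and that vocabulary becomes the bulk of the program.

```python
# Grow the language from within: each function adds a new "word".

def words(text):
    # New word: split text into tokens.
    return text.split()

def frequencies(tokens):
    # New word: count occurrences of each token.
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    return counts

def most_common(counts, n):
    # New word: the n tokens with the highest counts.
    return sorted(counts, key=counts.get, reverse=True)[:n]

# Once the vocabulary exists, the meaning is one line:
top_words = most_common(frequencies(words("a b b c c c")), 2)
print(top_words)  # ['c', 'b']
```

The one-liner is concise only because the three definitions above it are part of the program; the "language" is the bulk of the code.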

Of course, the alleged advantage of little problem-oriented languages is
that they're reusable in different contexts. Consider that critically:
languages rarely work together easily, because they often have different
assumptions or models for cross-cutting concerns (such as concurrency,
persistence, reactivity, security, dependency injection, modularity, live
coding). Consider, for example, the challenge involved in fusing three or
more frameworks, each of which has a different callback model and
concurrency model.

Your proposal is effectively that we develop a series (or stack) of little
languages for specific problems. But even if you can express your meaning
concisely on a given stack, you will encounter severe difficulty when it
comes time to *integrate* different stacks that solve different problems.

(Some people are seeking solutions to the general problem of language
composition, e.g. with Algebraic Effects or modeling languages as
Generalized Arrows.)


> one could replace any part of the system because one could understand any
> part of it


Sadly, the former does not follow from the latter.

The ability to understand "any part of" a system does not imply the ability
to understand how replacing that part (in a substantive way) will affect
the greater system. E.g. one might look into a car engine and say: "hmm, I
understand this tube, and that bolt, and this piston..." but not really
grok the issues of vibration, friction and wear, pressure, heat, etc. For
such a person, a simple fix is okay, since the replacement part does the
same thing as the original. But substantive modifications would be risky
without more knowledge.

Today, much software is monolithic for exactly that reason: to understand a
block of code often requires *implementation* knowledge of the functions it
calls and the larger context. It requires a long time to grok a new
codebase. The same would generally be true of little languages.

Understanding how parts fit into the greater system *without* deep
implementation knowledge is precisely about *compositional properties*. A
set of properties P is compositional if:

  exists f . forall A, +, B . P(A+B)=f(P(A),'+',P(B))

where A and B are compositional operands (aka components) and '+' is a
composition operator.
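
A minimal sketch of this definition (in Python, my own illustration rather than anything from the thread): list length is a compositional property under concatenation, with f(p, '+', q) = p + q, so we can reason about the composite from shallow knowledge of the parts.

```python
# P = len, the composition operator is list concatenation '+',
# and f combines the parts' properties without inspecting the parts.

def f(p, op, q):
    assert op == '+'
    return p + q

A = [1, 2, 3]
B = ['x', 'y']

# P(A+B) = f(P(A), '+', P(B)) holds for all lists A and B:
assert len(A + B) == f(len(A), '+', len(B))

# Contrast: the *median* of a numeric composite cannot be computed
# from the two medians alone, so median is not compositional --
# reasoning about it requires deep knowledge of the operands.
```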

I'll also quote myself:

I eventually reached a conclusion that *composition* is the most essential
meta-property for large-scale systems. A set of compositional
properties P has the feature that there exists some function F for which:
forall X,Y,*. P(X*Y)=F(P(X),'*',P(Y)). That is, developers can reason about
properties of a composite with only shallow knowledge of the components.
This becomes a valid basis for intuitions. If critical safety, security,
consistency, concurrency, resilience, robustness, and resource management
properties can be made compositional, then users are able to quickly reason
about those properties and focus the bulk of their attentions instead on
domain modeling and correctness concerns.

Equational reasoning is also useful - not for reasoning about programs...
but for reasoning about CHANGES to programs: refactoring and abstraction
(and optimization). The most important equational reasoning properties are
identity, idempotence, commutativity, and associativity. These are properties
of declarative expression
(http://awelonblue.wordpress.com/2012/01/12/defining-declarative/).
(Imperative expression only has associativity and identity.) Functional
programmers often seem to operate under a prideful delusion that "pure"
expression is necessary for rich equational reasoning. But effects,
carefully chosen, can achieve idempotence and commutativity.

http://lambda-the-ultimate.org/node/4606#comment-72986
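
To illustrate the last point with a sketch (in Python; a deliberately simple model of my own, not a real effect system): an effect that writes facts into a set-based store is idempotent (duplicate writes are no-ops) and commutative (write order is unobservable), so many equational refactorings remain valid despite the presence of effects.

```python
# The effect: insert a fact into a set-based store.
def write(store, fact):
    store.add(fact)

s1, s2 = set(), set()

# Same writes in a different order, with one write duplicated:
for fact in ["x=1", "y=2", "x=1"]:
    write(s1, fact)
for fact in ["y=2", "x=1"]:
    write(s2, fact)

# Idempotence and commutativity make the difference unobservable,
# so a refactoring that reorders or deduplicates these writes is safe.
assert s1 == s2
```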

Support for problem-oriented languages is a form of abstraction. I believe
that composition is much more important than abstraction or purity [3][4].

[3] http://lambda-the-ultimate.org/node/4312#comment-66162
[4] http://lambda-the-ultimate.org/node/4357#comment-67178



> it would enable a learning not possible by any other means because it would
> be "able to be inspected/introspected"


One could design a meta-language that helps address this cross-cutting
concern, but it's hardly automatic. Getting good error messages or IDE and
debugger integration across deep layers or series of languages is not easy,
especially if they operate in flexible phases (e.g. where each language may
have some static and some dynamic aspects).



> Thus, using would become learning would become programming. This is one of
> my most passionate aims, but unfortunately daily "grind work to pay the
> bills" generally takes away from my efforts.
>

I think there's a lot that could be done to improve the programming
experience and better support learnable programming. I believe it's a
worthy pursuit. Layered languages might be part of that experience, but I
think we'll need to restrict them a lot to achieve composition with respect
to the many cross-cutting integration concerns.

Regards,

Dave
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
