On 6/6/2011 6:05 PM, David Barbour wrote:


On Sat, Jun 4, 2011 at 10:44 AM, Julian Leviston <[email protected]> wrote:

    Is a language I program in necessarily limiting in its expressibility?


Yes. All communication architectures are necessarily limiting in their expressiveness (in the sense defined by Matthias Felleisen). For example, one cannot easily introduce reactivity, concurrency, or constraint models to languages not already designed for them. Even with extensible syntax, you might be forced to re-design, re-implement, and re-integrate all the relevant libraries and services from scratch to take advantage of a new feature. Limitations aren't always bad things, though (e.g. when pursuing security, scalability, safety, resilience, modularity, extensibility, or optimizations). We can benefit greatly from favoring the 'principle of least power' in our language designs.

interesting... the "principle of least power" is something I hadn't really thought about previously...



    Is there an optimum methodology of expressing algorithms (i.e.,
    nomenclature)?

No. From Kolmogorov complexity and the pigeonhole principle, we know that any given language must make trade-offs in how efficiently it expresses a given behavior. The language HQ9+ shows us that we can (quite trivially) optimize expression of any given behavior by tweaking the language. Fortunately, there are a lot of 'useless' algorithms that we'll never need to express. Good language design aims at optimizing, abstracting, and refactoring the expression of useful, common behaviors, even at some expense to rare or less useful behaviors.
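The HQ9+ point is easy to see concretely. Below is a minimal sketch of an HQ9+ interpreter (the '9' output is abbreviated to one line of the song for brevity):

```python
def run_hq9plus(program: str) -> str:
    """Interpret an HQ9+ program and return its printed output."""
    out = []
    acc = 0  # '+' increments an accumulator that is never observable
    for ch in program:
        if ch == 'H':
            out.append("Hello, world!")
        elif ch == 'Q':
            out.append(program)  # quine: print the program's own source
        elif ch == '9':
            out.append("99 bottles of beer on the wall")  # abbreviated lyric
        elif ch == '+':
            acc += 1
    return "\n".join(out)
```

Any behavior HQ9+ "supports" costs exactly one character to express, precisely because the entire behavior was baked into the language up front.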

yeah...

I think many mainstream languages show this property, as they will often be specialized for some sets of tasks, while farther-reaching features (the ability to extend the syntax or core type system, ...) are generally absent.



    Is there a good or bad way of expressing intent?


There are effective and ineffective ways of expressing intent.

We certainly want to minimize boiler-plate and noise. If our languages impose semantic properties (such as ordering of a collection) where we intend none, we have semantic noise. If our languages impose syntactic properties (such as semicolons) where they have no meaning to the developer, we have syntactic noise. If our languages fail to abstract or refactor some pattern, we get boiler-plate (and recognizable 'design patterns').

But we also don't want to sacrifice performance, security, modularity, et cetera. So sometimes we take a hit on how easily we can express intent.
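As a small illustration of semantic noise (the warning names here are made up for the example):

```python
# Semantic noise: a list imposes an ordering the author never intended,
# so readers and diff tools must treat that order as meaningful.
warnings_as_list = ["unused", "shadowing", "truncation"]  # order is noise

# The actual intent (membership only) stated directly, with no spurious order:
warnings_as_set = {"unused", "shadowing", "truncation"}
```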


yeah...

usually the choice is between semicolons and significant line breaks (or heuristics which try to guess whether a break was intended);
semicolons are then the lesser of the evils.

granted, yes, one wouldn't need either if the syntax were designed so that statements and expressions were naturally self-terminating. with common syntax designs, however, this is often not the case, so extra symbols are needed, mostly as separators or to indicate the syntactic structure.
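as a sketch of what "naturally self-terminating" can mean: in a fully prefix (Polish) notation every operator has a fixed arity, so each expression ends itself and no statement separators are needed. this is an illustrative toy grammar, not any particular language:

```python
# Prefix notation: each operator's arity tells the parser exactly when an
# expression ends, so consecutive statements need no ';' or line breaks.
def parse(tokens):
    tok = next(tokens)
    if tok in ('+', '*'):                 # binary: consume exactly two operands
        return (tok, parse(tokens), parse(tokens))
    return int(tok)                       # anything else is an integer literal

def parse_program(source):
    tokens = iter(source.split())
    exprs = []
    while True:
        try:
            exprs.append(parse(tokens))
        except StopIteration:             # input exhausted: program complete
            return exprs
```

for example, `parse_program("+ 1 2 * 3 4")` yields two statements, `('+', 1, 2)` and `('*', 3, 4)`, with nothing marking the boundary between them.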


    is there a way of patterning a language of programming such that
    it can extend itself infinitely, innately?


Yes. But you must sacrifice various nice properties (e.g. performance, securability, modularity, composition) to achieve it.

If you're willing to sacrifice ad-hoc extension of cross-cutting features (e.g. reactivity, concurrency, failure handling, auditing, resource management), you can still achieve most of what you want, and embed a few frameworks and EDSLs (via extensible syntax or partial evaluation) to close the remaining expressiveness gap. If you have a decent concurrency and reactivity model, you should even be able to abstract and compose IoC (inversion of control) frameworks as though they were normal objects.


yep, and often a lot of this isn't terribly useful in practice.

and, likewise, a lot of "advanced" functionality can be added more narrowly:
API functionality;
special purpose attributes or modifiers;
...


personally, I keep around a few "high power" concepts, but these are far fewer than I could support.

for example, I have gotten into arguments with someone before about my language's lack of macro facilities or user-defined syntax extensions (or, at least, in-language syntax extensions).

this was partly because both would open up additional and somewhat more awkward issues; for example, macros (in the Common Lisp sense) could risk exposing an uncomfortable number of implementation details, and the same goes for extensible syntax.

some basic amount of extension is possible, though, mostly by registering callbacks, and at most levels of the tower it is possible to register new callbacks for new functionality (this is actually how a fair amount of the VM itself is implemented).

most things generally take a form more like "how do I perform operation X given Y?", so there is a lot of registering handlers for various operations, registering predicate operations, ...
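a minimal sketch of that registration style (all names here are illustrative, not taken from any real VM): functionality is added by registering a handler for an operation together with a predicate saying when the handler applies.

```python
_handlers = {}   # operation name -> list of (predicate, handler) pairs

def register(op, predicate, handler):
    """Teach the system a new way to perform 'op'."""
    _handlers.setdefault(op, []).append((predicate, handler))

def perform(op, value):
    """Answer 'how do I perform operation op given value?'"""
    for predicate, handler in _handlers.get(op, []):
        if predicate(value):
            return handler(value)
    raise NotImplementedError(f"no handler for {op!r} on {value!r}")

# New cases can be taught to the system after the fact:
register("stringify", lambda v: isinstance(v, int), lambda v: f"int:{v}")
register("stringify", lambda v: isinstance(v, list),
         lambda v: "[" + ", ".join(perform("stringify", e) for e in v) + "]")
```

here `perform("stringify", [1, 2])` yields `"[int:1, int:2]"`, and supporting a new value kind is one more `register` call rather than an edit to `perform` itself.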


however, extensibility is often not a terribly high priority, and even then I still try to retain modularity and abstraction wherever possible.

ideally, a piece of machinery should be a black box, and the less one needs to know about its internals to work with it, the better. usually, this means keeping the functionality of each part as simple as is reasonable, and preferring composition over extension.


IMO, there are often better ways to approach things than always trying to shove more fields and methods into existing structures, as this will often fairly quickly turn into a bloated and unmaintainable mess...

so, my personal preference is often to have operations "over" certain types, rather than try to put the operations "into" the types.

evaluating predicates is also a fairly major thing ("if(P(X))Y").
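one way to sketch operations living "over" types rather than "into" them is Python's functools.singledispatch (the shape types here are made up for illustration):

```python
from functools import singledispatch

# The classes stay plain data: no area() method is shoved into them.
class Circle:
    def __init__(self, radius):
        self.radius = radius

class Square:
    def __init__(self, side):
        self.side = side

# The operation lives outside the types; new types are supported by
# registering another case, without editing area() or the classes.
@singledispatch
def area(shape):
    raise NotImplementedError(f"no area rule for {type(shape).__name__}")

@area.register(Circle)
def _(shape):
    return 3.14159 * shape.radius ** 2

@area.register(Square)
def _(shape):
    return shape.side ** 2
```

adding a second operation (say, perimeter) is another free function over the same types, so neither the operations nor the types accumulate into one bloated structure.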


or such...

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
