On 2/29/2012 5:34 AM, Alan Kay wrote:
> With regard to your last point -- making POLs -- I don't think we are
> there yet. It is most definitely a lot easier to make really powerful
> POLs fairly quickly than it used to be, but we still don't have a
> nice methodology and tools to automatically supply the IDE, debuggers,
> etc. that need to be there for industrial-strength use.
the "basic infrastructure" is needed, and to a large degree this is the
harder part, but it is far from a huge or impossible undertaking (it is
more a matter of scaling: namely tradeoffs between performance,
capabilities, and simplicity).
another issue though is the cost of implementing the POL/DSL/... vs the
problem area being addressed: even if creating the language is fairly
cheap, if the problem area is one-off, it doesn't really buy much.
a typical result is that of creating "cheaper" languages for more
specialized tasks, and considerably more "expensive" languages for more
general-purpose tasks (usually with a specialized language "falling on
its face" in the general case, and a general-purpose language often
being a poorer fit for a particular domain).
the goal is, as I see it, to make a bigger set of reusable parts, which
can ideally be put together in new and useful ways. ideally, the IDEs
and debuggers would probably work similarly (by plugging together logic
from other pieces).
in my case, rather than trying to make very flexible parts, I had been
focused more on making modular parts. so, even if the pipeline is itself
fairly complicated (as are the parts themselves), one could presumably
split the pipeline apart, maybe plug new parts in at different places,
swap some parts out, ... and build something different with them.
so, it is a goal of trying to move from more traditional software
design, where everything is tightly interconnected, to one where parts
are only loosely coupled (and typically fairly specialized, but
reasonably agnostic regarding their "use-case").
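a minimal sketch of what that loose coupling can look like (Python, purely illustrative; all of the part names here are invented): each part is a plain function that agrees only on the shape of the data it consumes and produces, so a pipeline can be split apart, extended, or have parts swapped without touching the rest.

```python
# Each "part" is a plain function: it agrees only on its input/output
# shape, not on who produced the data or who consumes it next.

def tokenize(text):
    """Source text -> list of tokens."""
    return text.split()

def uppercase_tokens(tokens):
    """Example transform part: normalize tokens."""
    return [t.upper() for t in tokens]

def count_tokens(tokens):
    """Example sink part: tokens -> summary value."""
    return len(tokens)

def pipeline(*parts):
    """Compose parts left-to-right into one callable."""
    def run(data):
        for part in parts:
            data = part(data)
        return data
    return run

# Build one pipeline...
count_words = pipeline(tokenize, count_tokens)
print(count_words("a b c"))       # -> 3

# ...then rebuild it with a part swapped in, reusing the others as-is.
shout = pipeline(tokenize, uppercase_tokens)
print(shout("hello world"))       # -> ['HELLO', 'WORLD']
```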
so, say, one wants a new language with a new syntax, there are 2 major
ways to approach this:
route A: have a "very flexible" language (or meta-language), where one
can change the syntax and semantics at will, ... this is what VPRI seems
to be working towards.
route B: allow the user to throw together a new parser and front-end
language compiler, reusing what parts from the first language are
relevant (or pulling in other parts maybe intended for other languages,
and creating new parts as needed). how easy or difficult it is, is then
mostly a product of how many parts can be reused.
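route B might look roughly like this (a toy Python sketch; the mini-language and part names are invented for illustration): the already-built parts (here, a tiny stack VM) are reused as-is, so supporting a new surface syntax costs only the new front-end.

```python
# Reused part: a tiny stack-VM "back end" that existing front ends
# already compile to.
def run_vm(code):
    """Execute a list of ('push', n) / ('add',) / ('mul',) instructions."""
    stack = []
    for op in code:
        if op[0] == 'push':
            stack.append(op[1])
        elif op[0] == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == 'mul':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# New part: a front end for a small prefix-notation language.
# Only this parser had to be written; the VM is reused unchanged.
def compile_prefix(tokens):
    """Parse '+ 1 * 2 3'-style input, emitting VM code."""
    tok = tokens.pop(0)
    if tok == '+':
        return compile_prefix(tokens) + compile_prefix(tokens) + [('add',)]
    if tok == '*':
        return compile_prefix(tokens) + compile_prefix(tokens) + [('mul',)]
    return [('push', int(tok))]

code = compile_prefix("+ 1 * 2 3".split())
print(run_vm(code))   # -> 7
```

the cost of the "new language" here is the dozen lines of parser, not the whole stack.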
so, a language looks like an integrated whole, but is actually
internally essentially built out of LEGO blocks... (with parts
essentially fitting together in a hierarchical structure). it is also
much easier to create languages with similar syntax and semantics, than
to create ones which are significantly different (since more differences
mean more unique parts).
granted, most of the languages I have worked on implementing thus far,
have mostly been "bigger and more expensive" languages (I have made a
few small/specialized languages, but most have been short-lived).
also, sadly, my project currently also contains a few places where there
are "architecture splits" (where things on opposite sides work in
different ways, making it difficult to plug parts together which exist
on opposite sides of the split). by analogy, it is like where the screw
holes/... don't line up, and where the bolts are different sizes and
threading, requiring ugly/awkward "adapter plates" to make them fit.
essentially, such a system would need a pile of documentation, ideally
detailing which parts exist, what each does, what inputs and outputs
are consumed and produced, ... but, writing documentation is, sadly,
kind of a pain.
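one low-effort option (a hypothetical sketch, not any particular system) is to keep at least part of that documentation machine-readable, so the set of parts and what they consume/produce can be listed and queried:

```python
# A hypothetical machine-readable "parts manifest": each entry records
# what a part does and what it consumes/produces.
PARTS = {
    "tokenizer": {"consumes": "source text", "produces": "token list",
                  "does": "split source into tokens"},
    "parser":    {"consumes": "token list",  "produces": "AST",
                  "does": "build a syntax tree"},
    "codegen":   {"consumes": "AST",         "produces": "bytecode",
                  "does": "emit VM instructions"},
}

def parts_producing(kind):
    """List which parts produce a given kind of output."""
    return [name for name, p in PARTS.items() if p["produces"] == kind]

print(parts_producing("AST"))   # -> ['parser']
```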
another possible issue is that parts from one system won't necessarily
fit nicely into another: person A builds one language and VM, and person
B builds another; even if both are highly modular, there may be enough
structural mismatches to make interfacing them difficult (requiring much
pain to bridge the gap).
some people have accused me of "Not Invented Here", mostly for the sake
of re-implementing things theoretically found in libraries, but
sometimes this is due to legal reasons (not liking the license terms),
and other times because the library would not integrate cleanly into the
project.
often, a library's essential aspects can be "decomposed" and its
functionality re-implemented from more basic parts. another advantage is
that these
parts can often be again reused in implementing other things, or allow
better inter-operation between a given component and those other
components it may share parts with (and the parts may be themselves more
useful and desirable than the thing itself).
this also doesn't mean creating a "standard of non-standard" (some
people have accused me of this, but I disagree), since often all that is
needed is to create something in the form of the standard (and its
common/expected behaviors), and everything will work as expected.
so, the essence is in the form, and in the behaviors, and not in the
particular artifacts which are used to manifest it. so, one implements
something according to a standard, but the standard doesn't really care
whose code was used to implement it (or often, how things actually work
internally, which is potentially different from one implementation to
another).
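the same point in code form (an illustrative Python sketch; the "standard" here is invented): a caller written against the agreed form and behavior cannot tell whose code it is using, so any conforming implementation can be swapped in.

```python
# Two independent implementations of the same (invented) "standard":
# a function that must return the bytes of its input, reversed.

def reverse_bytes_simple(data: bytes) -> bytes:
    """One implementation: slice-based."""
    return data[::-1]

def reverse_bytes_loop(data: bytes) -> bytes:
    """Another implementation: explicit loop; same observable behavior."""
    out = bytearray()
    for b in reversed(data):
        out.append(b)
    return bytes(out)

# A caller depends only on the standard's behavior, not on the
# particular artifact implementing it.
for impl in (reverse_bytes_simple, reverse_bytes_loop):
    assert impl(b"abc") == b"cba"
```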
sometimes, there are reasons to not just chase after the cult of "there
is a library for that", but, annoyingly, many people start raving at the
mere mention of doing some task without using whatever library/tool/...
they think "should" be used in performing said task.
granted, in a few places, I have ended up resorting to relational-style
systems instead (because, sadly, not every problem maps cleanly to a
tree structure). these are typically rarer, though more common in my 3D
engine and elsewhere: essentially, my 3D engine amounts to a large and
elaborate system of relational queries (and, no, without using an
RDBMS), and only tangentially deals with actually sending things out to
the video card. there are pros and cons here.
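relational-style queries over plain in-memory tables need no RDBMS at all; for instance (a toy Python sketch, invented data, not the engine's actual code):

```python
# "Tables" are just lists of dicts; a "query" is a filter/join over them.
objects = [
    {"id": 1, "kind": "light", "region": 7},
    {"id": 2, "kind": "mesh",  "region": 7},
    {"id": 3, "kind": "light", "region": 9},
]
regions = [
    {"region": 7, "visible": True},
    {"region": 9, "visible": False},
]

def visible_lights(objects, regions):
    """Roughly: SELECT o.id FROM objects o JOIN regions r
       ON o.region = r.region WHERE r.visible AND o.kind = 'light'."""
    vis = {r["region"] for r in regions if r["visible"]}
    return [o["id"] for o in objects
            if o["kind"] == "light" and o["region"] in vis]

print(visible_lights(objects, regions))   # -> [1]
```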
*From:* Loup Vaillant <l...@loup-vaillant.fr>
*Sent:* Wednesday, February 29, 2012 1:27 AM
*Subject:* Re: [fonc] Error trying to compile COLA
Alan Kay wrote:
> Hi Loup
> Very good question -- and tell your Boss he should support you!
Cool, thank you for your support.
> [...] One general argument is
> that "non-machine-code" languages are POLs of a weak sort, but more
> effective than writing machine code for most problems. (This was
> controversial 50 years ago -- and lots of bosses forbade using any
> higher level language.)
I hadn't thought about this historical perspective. I'll keep that in
mind.
> Companies (and programmers within) are rarely rewarded for
> over the real lifetime of a piece of software [...]
I think my company is. We make custom software, and most of the time we
also get to maintain it. Of course, we charge for both. So, when we
manage to keep the maintenance cheap (fewer bugs, simpler code...), we
benefit.
However, we barely acknowledge it: much code I see is a technical debt
waiting to be paid, because the original implementer wasn't given the
time to do even a simple cleanup.
> An argument that resonates with some bosses is the "debuggable
> requirements/specifications -> ship the prototype and improve it"
> approach -- benefits show up early on.
But of course. I should have thought about it, thanks.
> [...] one of the most important POLs to be worked on are
> the ones that are for making POLs quickly.
This is why I am totally thrilled by OMeta and Maru. I use them to point
out that programming languages can be much cheaper to implement than
most think they are. It is difficult however to get past the idea that
implementing a language (even a small, specialized one) is by necessity
a huge undertaking.
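A small worked example supports the point (my own sketch in Python, not OMeta or Maru): a complete, working expression language -- tokenizer, parser, evaluator -- fits in a couple dozen lines.

```python
# A complete little arithmetic language: tokenizer, recursive-descent
# parser, and evaluator, in ~25 lines.
import re

def tokenize(src):
    return re.findall(r"\d+|[()+*]", src)

def parse_expr(toks):          # expr := term ('+' term)*
    val = parse_term(toks)
    while toks and toks[0] == '+':
        toks.pop(0)
        val += parse_term(toks)
    return val

def parse_term(toks):          # term := atom ('*' atom)*
    val = parse_atom(toks)
    while toks and toks[0] == '*':
        toks.pop(0)
        val *= parse_atom(toks)
    return val

def parse_atom(toks):          # atom := number | '(' expr ')'
    tok = toks.pop(0)
    if tok == '(':
        val = parse_expr(toks)
        toks.pop(0)            # drop ')'
        return val
    return int(tok)

def evaluate(src):
    return parse_expr(tokenize(src))

print(evaluate("(1+2)*3+4"))   # -> 13
```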
fonc mailing list