On 7/27/2011 2:12 AM, David Barbour wrote:
On Tue, Jul 26, 2011 at 11:14 PM, BGB <[email protected]> wrote:

    one can support ifdef blocks in the IL, no real problem there.


Those seem like a problem all by themselves. Definitions are inflexible, lacking the domain of the language's types, and lack effective support for the complex ad-hoc decisions that would normally go in 'if' conditionals. Definitions are typically rigid, i.e. you cannot choose some definitions to be constant (and specialize on those) while others are mutable like regular variables. The scope of a definition is not controlled by semantics in the language, as a regular variable's is. With separate compilation, definitions are often inconsistent across modules, and configuration management easily becomes a problem.


if a person needs the decision made at run time, then they can use a construction such as, say:
if(defined(FOO) && bar)
    ...

which would be handled as a run-time expression.

granted, yes, "if()" and "ifdef()" are not strictly equivalent WRT semantics.

either way, in my language they are not the same as C or C++ ifdefs, because they are delayed until much later (in the interpreter they are actually evaluated as run-time expressions, the main idea being that non-executable blocks will simply be skipped).
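As a sketch of the difference: a run-time `defined()` is just an ordinary predicate against a definition table, so the untaken branch still exists in the program. The `g_defs` table and `defined()` helper below are invented for illustration, not any actual VM API:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical definition table: instead of the C preprocessor erasing
// the branch before compilation, the interpreter keeps both branches
// and consults this table when the expression is evaluated.
static std::unordered_map<std::string, bool> g_defs = {{"FOO", true}};

bool defined(const std::string& name) {
    auto it = g_defs.find(name);
    return it != g_defs.end() && it->second;
}

int choose(bool bar) {
    // behaves like `if(defined(FOO) && bar)` evaluated at run time:
    // the untaken branch is skipped, not removed
    if (defined("FOO") && bar)
        return 1;
    return 0;
}
```

Unlike `#ifdef`, flipping an entry in the table changes behavior without recompiling anything, which is the sense in which the two are not strictly equivalent.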



    Forth and PostScript are not "unnecessarily powerful".


Sure they are. We might in the general case need Turing power, at the topmost level of a service or application, but the vast majority of a program does not need that power. By definition, any power you don't need is unnecessary.

Consider PostScript: here's a language that can spend an infinite amount of time drawing in minute detail on the upper-left square inch of your document. Now, reasonable people won't write programs that do that, but they can. Put another way: if they have this power and they aren't using it, they clearly have more power than they need.

For developing a zoomable UI, a very nice property would be a language that always draws 'big' details before small ones (level of detail), and that allows you to draw just the visible portions or nearly so (culling). Through a lot of careful design effort, you can create a language that has these properties - but, in turn, you will need to sacrifice some power. Now, that zoomable UI language could still be capable of drawing arbitrary images if given enough resources (time, resolution, et cetera) but the decision on how many resources to provide is external - which is as it should be.
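The "big details first, cull the rest" property can be sketched in ordinary code: filter primitives against the viewport, then order the survivors largest-first so a renderer can stop early at any level of detail. The `Shape` and `Rect` types and `draw_order` function here are invented for illustration, not from any actual zoomable-UI language:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Shape { double x, y, size; };        // center plus rough radius
struct Rect  { double x0, y0, x1, y1; };    // visible viewport

// crude bounding-box test: does the shape overlap the viewport?
bool visible(const Shape& s, const Rect& v) {
    return s.x + s.size >= v.x0 && s.x - s.size <= v.x1 &&
           s.y + s.size >= v.y0 && s.y - s.size <= v.y1;
}

// culling + level of detail: drop invisible shapes, then sort the
// rest largest-first so truncating the list loses only small details
std::vector<Shape> draw_order(std::vector<Shape> shapes, const Rect& view) {
    shapes.erase(std::remove_if(shapes.begin(), shapes.end(),
                     [&](const Shape& s) { return !visible(s, view); }),
                 shapes.end());
    std::sort(shapes.begin(), shapes.end(),
              [](const Shape& a, const Shape& b) { return a.size > b.size; });
    return shapes;
}
```

A language that enforces this ordering by construction gives up the PostScript-style freedom to draw in any order - which is exactly the power-for-properties trade described above.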

Really, unnecessary power is just a currency you should recognize and trade for useful features.

a non-Turing-complete IL is too limited to do much of anything useful with, WRT developing "actual" software...



    if the HLL is somewhere along C or C++ lines (with pointers and
    OOP and so on), then the IL being powerful is not the issue.


Agreed. If your alleged 'high' level language happens to be unsafe and undefined for many operations, then you have much bigger issues than the quality of your IL.

the usual strategy is to use runtime checks for anything that can't be determined at compile time.

for example, a VM could insert runtime bounds-checks where needed, and also detect/trap any cases of "pointer spoofing" (basically, where an integer->pointer conversion would result in an invalid reference). "spoof-traps" could also be used in cases of unions or structs/... which couldn't be proven safe.
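A minimal sketch of such a spoof-trap, assuming the VM keeps a registry of the addresses it has handed out and validates integer->pointer conversions against it (all of the names here - `g_live`, `vm_alloc_register`, `int_to_ptr` - are hypothetical, not any real VM's API):

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// addresses the VM knows about; a real VM would track ranges and
// lifetimes, but a set of base addresses shows the idea
static std::set<std::uintptr_t> g_live;

void* vm_alloc_register(void* p) {
    g_live.insert(reinterpret_cast<std::uintptr_t>(p));
    return p;
}

// int->pointer conversion with a spoof-trap: if the integer does not
// name a live object, the conversion traps (modeled here as nullptr)
// instead of producing an invalid reference
void* int_to_ptr(std::uintptr_t v) {
    return g_live.count(v) ? reinterpret_cast<void*>(v) : nullptr;
}
```

This keeps integer<->pointer conversion available (since it is in fact useful) while making a spoofed integer unable to reach memory the VM never handed out.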

granted, although it would be "better" to disallow conversions between pointers and integers, these are in fact fairly useful operations.


(my own language, however, doesn't currently allow either direct pointer arithmetic or pointer<->integer conversions).



    a language much weaker than [Java or C#] is probably too weak to
    be really usable in any non-trivial context.


It does seem that way, at least to someone who is only familiar with the C++ idioms.

these idioms are the established ways of doing things; a language which doesn't support them would invariably be rejected by most programmers.



    dataflow languages currently lack mainstream acceptance as
    application-development languages.


Yet, synchronous reactive systems are widely accepted for real-time mission-critical applications. And dataflow systems are the basis for some of the most popular user-programmable application software ever (spreadsheets). Dataflow UIs are not inherently difficult to develop (though legacy integration is a pain, e.g. OpenGL strongly assumes procedural expression).


spreadsheets are not programs; in the same way, SQL databases and the like are not programs.

however, dataflow-like constraints are a nifty feature, and one of my earlier languages had them (they are sort of like values, but their apparent value changes whenever one of the input values is modified). IIRC, a flush/recalculate model was used.
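A minimal sketch of such a flush/recalculate constraint cell, assuming the model described above: a write flushes (dirties) the dependents, and the next read recalculates. The `Cell` type is invented for illustration, not taken from any of BGB's actual languages:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// a constraint cell: behaves like a value, but if it has a rule, its
// apparent value is recomputed lazily after any input changes
// (assumes the dependency graph is acyclic)
struct Cell {
    int value = 0;
    bool dirty = true;
    std::function<int()> rule;      // empty for plain input cells
    std::vector<Cell*> dependents;  // cells whose rules read this one

    void set(int v) { value = v; flush(); }

    // flush: transitively mark dependents as needing recalculation
    void flush() {
        for (Cell* d : dependents) { d->dirty = true; d->flush(); }
    }

    // recalculate: recompute only if dirty and rule-driven
    int get() {
        if (dirty && rule) { value = rule(); dirty = false; }
        return value;
    }
};
```

Usage: wire `sum.rule = [&]{ return a.get() + b.get(); }` and register `&sum` as a dependent of `a` and `b`; after `a.set(10)`, the next `sum.get()` reflects the new input without any explicit recomputation call.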


But I think you skipped the point. The issue of non-algorithmic software still strongly favors distributing a higher level language.

non-algorithmic software is, by definition, generally not software; more correctly, it can be described as data and file formats.



    Procedural + OOP is probably a much better bet.


Well, they're the popular bet anyway. And today, we have a lot of buggy application software that takes too long to develop. I wonder if there's a causal relationship in there somewhere.

You won't be at any risk of improving things by betting on Procedural + OOP.


there is nothing really wrong with the current methodology; it just needs more features (for example, mainstream OOP languages have generally lacked closures and eval and similar, which can be limiting).


more recently:
Java and C# have added closures and eval;
the C++ standard has added partial closures (they are not full closures, in that by-reference captures don't survive past the end of the parent scope, so they are more like GCC's nested functions...).
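The C++ case can be illustrated: a by-value capture is copied into the closure object and so does survive the enclosing scope, whereas a by-reference capture of the same local would dangle. `make_counter` is an invented example, not from the thread:

```cpp
#include <cassert>
#include <functional>

// returns a closure that outlives the scope of the local it captures
std::function<int()> make_counter() {
    int n = 0;
    // [n] copies n into the closure object, so calling the result
    // after make_counter() returns is safe; capturing [&n] instead
    // would leave a dangling reference to a dead stack slot, which is
    // the sense in which C++ closures are only "partial"
    return [n]() mutable { return ++n; };
}
```

In a garbage-collected language with full closures, the by-reference form would be safe too, because the captured environment would be kept alive.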

now, there are many more features that would be nice, but these mainstream languages are slow-moving, so if one really wants a particular language, one can design and implement it oneself.


    (4) The web model today is /extremely/ constrained [...]
    can't really make sense of the above.


I'll rephrase and summarize: The web is seriously 'gimped'. There is a lot of low-hanging fruit for code-distribution. The barrier is security - authority, resource control, composition - people have reason not to trust distributed programs. A secure, composable language can reach further and support valuable new features you've never thought about, because they seem impossible today.

possibly, but the other option is to just do like Flash and Silverlight, but more so (bigger and more powerful).

say, a VM like Flash, except it can deliver C and C++ code safely, and deal effectively with MLOC-scale applications.



    doesn't matter much for applications, as you don't generally want
    the user to know how it works.
    normally, the program is expected to be a sort of sealed black-box
    for the creators' eyes only.

    granted, this is not to say that FOSS people/... can't distribute
    in source form, but not everyone should *have* to distribute in
    source form.


In most cases - business apps, for example, or apps tied to a particular service - developers don't care whether the user looks under the hood. For applications, users are starting to prefer support for alternative UIs, e.g. an app might be a service with a small web-server on a configured port.

The path of least resistance should be distribution in source form.

for web-apps and web-pages maybe.


I am thinking more of traditional application software coming down over the internet (say, in a manner partly between the Web and something like Valve's Steam), or maybe sort of like the Android Market or similar...

say, one uses a program like Office or a 3D modeling app or similar, which proceeds to pull down all of its program code and data files, ... and installs them onto the local system.

so, probably, people pay for the software: one buys software at retail to get a code, then types it into one's software-manager (or whatever it is called), which authenticates and downloads the software. or, if better integrated with the internet, the retail code can travel directly from the shopping cart to the software-manager.



    it is like, the children peek behind the curtain, start messing
    with things, ..., and find themselves in the world of copyright
    infringement and IP law, and/or find themselves or their parents
    being faced with a lawsuit as a result.


    keeping proprietary code hidden away thus also serves the purpose
    of helping to prevent these "innocent little children" from
    unintentionally committing criminal acts, ...


You do have quite an imagination when it comes to protecting your status quo. Obfuscators are easily available to those companies interested in protecting innocent children.


unnecessary or drastic change may often be seen as evil.

hence, the status quo is king...


it is much like taking away someone's ability to get up in the morning, get a cup of coffee, and maybe sit down to read stuff on the internet or similar.

try to replace their coffee with tea (or even give them coffee in a different-than-preferred style), and one is asking for a riot and/or outright rebellion.

so, the best thing is to keep everything just as it is, so that older programs/methodologies/... continue to work until they fade away on their own (sort of like Fortran and COBOL...).


        Any good distribution language /will be/ high level, though
        not /because/ it's high level. There are quite a few
        desiderata for a web language.

    however, a lower-level language will be more abstracted from the
    high-level language, and more opaque-looking for prying eyes
    (likely more important for commercial people, one wants the code
    as difficult to get at as can reasonably be done).


The most effective solution to protect code is also one of the best: write a server, don't put those 'trade-secret' algorithms on the client.


that only works for client/server apps; if most of the bulk is on the client, it isn't going to work.


A second effective solution is to use an obfuscator.


possibly, but the problem is that most obfuscators I have seen mostly just rename variables, replace things with gotos, and add some garbage and similar, which is a bit weak (a person can likely just read over it and de-obfuscate it in their head).


But your words do suggest another option: a high-level program can model an abstract machine and 'interpret' a lower-level program. I've seen this done before: a website compiles apps to bytecode, and a developer wrote a JavaScript library to interpret the bytecode. Or, similarly, there's an x86 interpreter in JavaScript, and an example involving the Linux kernel running in JavaScript.


possibly, using JavaScript as a partial deployment model, assuming nothing better was available.

more likely, though, the program would not be running in the browser anyway, but simply distributed partly via the browser, and then run primarily on the host computer (as a standalone application).

Adobe already does something fairly similar for some of its products (basically, one installs the programs directly via the browser, rather than, say, downloading an installer EXE somewhere and running it...).


but, grr... stupid programs... I don't need covert McAfee or Yahoo Toolbar or similar just randomly getting installed whenever programs decide to update themselves...


or such...

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc