On 7/27/2011 9:35 AM, David Barbour wrote:

On Wed, Jul 27, 2011 at 3:41 AM, BGB <cr88...@gmail.com> wrote:

    a non-turing-complete IL is too limited to do much of anything
    useful with WRT developing "actual" software...


You aren't alone in holding this uninformed, reactionary opinion.

Consider: Do we need Turing power for 3D apps? No. Because we want *real-time* 3D apps, and that means we don't perform arbitrary computations per frame. There is /no/ "actual" software that needs Turing power for functions.


one does need recursion/... for many things to work.

otherwise, one has to develop software in a way completely devoid of recursion, which would be a major hindrance.


Anyhow, I did not suggest being rid of Turing power for the whole application, only moving and controlling access to it externally. ("We might in the general case need Turing power, at the topmost level of a service or application... capable of drawing arbitrary images if given enough resources (time, resolution, et cetera) but the decision on how many resources to provide is external")

That is, we do need software that can 'run forever' and 'compute arbitrary functions'. We just don't need that power in the middle. A top-level loop, based on passage of time, will do the job quite well.


typically, IME, the main place looping/recursion/... is needed is actually in the lower levels.

granted, it is not desirable for code to go into infinite loops or overflow the stack, but there are ways for a compiler to "safely" deal with both: typically a hidden execution counter and stack-depth checking, with the VM killing off any code that appears to have "run away".

a similar mechanism is also useful for implementing "soft threading", where the threading is implemented in the VM (using work queues or similar), rather than devoting lots of (generally considerably more expensive) OS-level threads to the task.


a simple example is drawing a scene using a BSP tree:
the BSP is itself a recursive structure, and so generally requires recursion to draw (even though something is terribly wrong if the drawing doesn't finish in finite time).

likewise for most 3D model formats based on linked-lists of meshes.

or light-and-shadow algorithms (such as depth-pass and depth-fail), ...




    the usual strategy is to use runtime checks for anything which
    can't be determined at compile time.


For a language or IL as finicky as C++, the usual strategy is to sandbox the whole runtime. This offers better performance on the off-chance things are written well, though it does force developers to use obtuse means of communication between extensions and subprograms.

sandboxing is common, but IMO not really necessary: for sanely written code, the vast majority of runtime checks can be skipped, leaving them mostly for operations that can't be statically proven (typically constructions which would be either removed or disallowed in "safer" languages).

granted, there is still the possibility of "meta" edge cases, such as when one implements one's own memory management: the VM can no longer prove the internal safety of accesses into this custom heap, potentially resulting in huge numbers of runtime checks (say, implement a custom MM and suddenly find that one's program is running 10x or 25x slower...).

that, or one could introduce an analog of C#'s "unsafe" modifier ("_Unsafe"), essentially allowing faster execution in these cases at the cost of requiring a higher level of trust.

so, there are some practical limits on justifiable coding practices (operations like "malloc()", "new", ... would be left under the VM's control, rather than being direct interfaces to library code as in a more traditional implementation).


say, if "new" is used, the VM allocates memory of the correct size, and then tries to verify that the program doesn't do anything "unsafe" with it (say, accessing it as an incompatible object type, ...); if it fails to prove the code safe, then any accesses will be checked as appropriate (retrieved pointers being spoof-checked, ...).



    "better" would be to disallow conversions between pointers and
    integers, these are in-fact fairly useful operations.


With dependent typing, pointer arithmetic can be safe. Adam Chlipala's 'Bedrock' supports it. To get dependent typing, you first remove unnecessary power.

say, one types:
i=*(int *)((intptr_t)0xF00BA5L);

(the above being an example of "spoofing", whereby one just pulls a pointer to an object out of thin air, rather than it being derived from a range-checked operation on a prior object).


as well as things like, say:
p=(int *)((((intptr_t)p)+15)&(~15));

(the above being potentially handled by a plain range-check operation, namely verifying that the new pointer is still valid for the object referenced by the old pointer; a more naive validator could use a spoof-check here instead).


typically, one can disallow these things, attempt to prove them as safe in a given context, or hedge them with run-time checks.

in my own language, the present semantics for most range-check failures (generally done per-type rather than on raw pointers) are that the pointer/reference drops to NULL on failure.

however, more ideal would be for, say, "*(arr+1023)" to result in a "RangeCheckException" rather than a "NullPointerException", but either way...


    these idioms are the defined ways of doing things.

    a language which doesn't support these idioms would be invariably
    rejected by most programmers.


Your language will also, invariably, be rejected by most programmers. If you're going reject ideas that won't be popular, you should give up on language design and find another hobby.


I am not a "language designer" per se, but rather more of an application developer.
hence, my language-design efforts are more related to my own uses.

FWIW, most of my "serious" development is in C and C++ (and some amount of ASM as well), and so my language serves several purposes:
be used for high-level application scripting;
serve as a surrogate for C or C++ when I need to "eval" something (I don't expect the standards to add "eval" any time soon, and C and C++ don't themselves make very good scripting languages);
also, ideally, be reasonably similar to Java where possible;
ideally, to not have any major surprises/"gotchas"/... (for a person who is already well-versed in mainstream languages);
...

hence, the language needs to serve a range of uses.

however, I don't expect it to displace either its immediate ancestors (JavaScript and ActionScript) or the languages it is designed to work with (C and C++).

longer-term, who knows, who cares?... it is mostly right-now which is important, and the future will go whatever way it goes when one gets to it.


But I think you're wrong. People will accept new tools and idioms, if:
* they see or are shown how to utilize them to solve today's problems
* they are motivated by a sufficient gain in features or reduction of pain
* there is good support for legacy systems, so they can transition cheaply
* the tools are readily accessible, open source, and easy to use and install

Acceptance won't happen all at once, but it doesn't need to.


I guess a question is time frame.

what may happen in 6 months is not necessarily the same as in 5 or 10 years.
I base my opinions mostly on recent history (the past 10-20 years or so), and Procedural+OOP doesn't seem to be going away within a relevant time frame (say, the next 10 years).

now, whether or not C and C++ are still major application development languages in 10 years is less certain.


however, I expect the next crop of languages will probably need some more features to remain competitive.



    spreadsheets are not programs.
    in the same way, SQL databases, ... are not programs.


You are incorrect. I think you would be surprised at how many people will speak of 'spreadsheet applications' as common-place.

I think this is in reference to "MS Excel" or similar (as the spreadsheet application) rather than the spreadsheets themselves.

most of anything beyond this is generally written in VB or similar, but the amalgam of VB and Excel leading to an application is more of a VB application than a spreadsheet, and VB is (sort of) a real language.


But I agree that spreadsheets and SQL are not sufficiently powerful to develop many useful domains of services and apps.

yep.



    however, dataflow-like constraints are a nifty feature, and one of
    my earlier languages had them (they are sort of like values, but
    their apparent value will change whenever one of the input values
    is modified, ...). IIRC, a flush/recalculate model was used.


Ah, another of those declarative languages with imperative glue?

For dataflow to really serve as a language of its own, it must be both bi-directional (effect and observe, push and pull, effective state management) and contingent (some form of if/then/else internal to the dataflow) and eliminate need for imperative bindings in the middle. We'll still need imperative for legacy adapters, but the idea is to slowly creep and replace the legacy.


I don't personally expect dataflow to replace imperative, but rather to be yet another feature piled on top.

basically, programming languages becoming amalgamated masses of features, with the choice of "style" being more of an immediate preference, rather than something imposed by the language.


maybe in 20 years a language like C++ will look naive and simplistic next to the languages existing at the time (where, in fact, understanding the entirety of the core language specification may be beyond the reach of any single person, and the standards are split into a number of sub-standards specifying various aspects of the language).

then one has a compiler and finds it to be, say, 35 Mloc, and an OS kernel somewhere around 500 Mloc (say, if in 20 years average code sizes inflate by approx 10x).

now, whether or not this would be the "ideal" future is potentially a matter of debate, followed by whether or not this is a "likely" future.



    non-algorithmic software is generally not, by definition, software.
    more correctly it can be described as data and file-formats.


You have an odd definition of 'software'. I suppose you go around claiming that BGBScript programs aren't "software", just a file format containing data that can be interpreted?


well, "code is data"; however, BGBScript is technically an algorithmic language. but yes, its programs are not "software" in themselves, as they are not stand-alone, but depend on a VM framework and host application in order to work (these in turn written mostly in C and similar).

there is no real immediate use in making it standalone though, given it is, after all, not much more than a scripting language.


Software is what describes behavior for programmable hardware. "Algorithmic" software is software that focuses on the internal computation of a function, as opposed to workflows or control relationships.

a language generally needs to deal with all of these cases to be useful for developing software.

also it generally needs to be capable enough of self-expansion to not be "owned" by "glass ceiling" effects.



    there is nothing really wrong with the current methodology, more
    just it needs more features


If there were nothing wrong, then errors and buggy code would not be /systemic/ problems, patches and upgrades would not need human intervention, clients could put useful code on servers to abstract new protocols, and distribution would be straightforward.

bugs and debugging are likely inevitable though.



    I am thinking more like traditional application software coming
    down over the internet (say, in a manner partly between the Web
    and something like Valve's Steam). or, maybe sort of like Android
    Market or similar...


The idea of rich internet applications appeals to me, too. But I think the real win will be zero-tier architecture: we write our services and apps together, and the appropriate amount of the service (i.e. the stuff that twiddles the video) is downloaded to the client. And other parts move to resources owned by the client, in the cloud - e.g. storage that is intended to survive loss of the machine.

One of my interests is market integration. I find Nick Szabo's work on e-markets quite fascinating.

I don't personally think that the elimination of local storage would be ideal.

also, network latency and bandwidth are likely to remain issues; even as bandwidth gets higher, so too will file sizes, so lots of local storage remains a necessary resource.

however, ideally, better network filesystem options will start coming around, such as if CIFS2 ended up being generally implemented and well supported, so that mounting network drives over the internet became more workable (like, say, CIFS2+SSL).

an example would be, say, a person has a tablet, and then mounts a drive from their home-system onto their portable device, ...

also, it would be nicer if IPv6 became common and NAT started to go away, and maybe also if there were a nicer way to have DNS entries for personal servers (like DynDNS, but more general, and more free...).



    unnecessary or drastic change may often be seen as evil.

    hence, the status quo is king...


A despotic king, perhaps.

I do agree with the need for a smooth transition strategy. "Drastic" change simply doesn't happen, and trying to force the issue causes people to panic and do stupid things.

yep, hence I figure Procedural+OOP will live on, but maybe some aspects of it will fade from popularity (much like goto and abuse of pointer arithmetic).

it is in some ways surprising that Java has gained as much ground as it has, given how severe a departure it was from C++.



    the best thing is to keep everything just as it is, so that older
    programs/methodologies/... continue to work until which point they
    fade away on their own (sort of like Fortran and COBOL...).


For things to 'fade away', you must be transitioning to something new. Change requires reaching for something 'different'. Progress requires reaching for something 'better'. Trying to "keep everything just as it is", if you were successful (which you won't be), would not result in anything fading away.

one adds on new features for the new things, and has them alongside the old ones, then people pick which they like more.

it is like how Fortran, COBOL, C, ... coexisted, and people began migrating away from Fortran and COBOL for new software (largely over to C and C++). later on, Java and C# came along and ate up some of what might otherwise have been done in C++.


now, if a language has the combined featureset of, say, C, Java, C#, ActionScript, ... then people can pick and choose which features to use and when.

pointers and structs when needed, classes when needed, dynamic classes and/or prototype OO when needed, as well as closures, eval, ...

for the most part, it is all win.



    the program would not be running in the browser anyways, but
    simply distributed partly via the browser, and then run primarily
    on the host computer (as standalone applications).


I do not believe the future will support a strong distinction between browser apps and standalone apps. But apps that run offline, I think are likely.


potentially...

one needs both more powerful browser apps (more like standalone apps), and probably also the ability to make "web apps" with a look and feel more like traditional standalone apps (traditionally, UIs built inside the browser suck a lot more than equivalent UIs built as standalone apps).

I don't personally believe building a mountain on top of the browser and running everything in tabs is a "sane" solution; probably better would be to migrate more browser-level functionality into standalone libraries (and in a way which is less OS-specific, unlike, say, the present Windows/Internet-Explorer thing...).


an example would be, say:
if GNOME depended less on specifics of Linux and X11;
if GNOME were also readily portable to Windows (absent ugly cruft, refusal to build, and the requirement to run an X server, such as X.org, on top of Windows);
if Firefox and GNOME components were better integrated (ideally, if both were more modularized, as simply sticky-gluing them together would be a much worse solution);
if GObject/... were ideally less horrid;
...

the ultimate goal then would be to essentially have cross-platform OS APIs, such that cross-platform apps don't need so many OS specifics.

the "ideal" solution would be if cross-platform GUI/network/... were no more painful than OpenGL is at present (and where generic filesystem APIs can transparently and portably access network storage).


like, say, one just opens a URL, and the specifics of HTTP or CIFS or ... are handled entirely by the underlying OS or framework.

say, if one could type something like:
FILE *fd;
fd=fopen("http://myserver/myfile.txt", "rt");
or:
fd=fopen("cifs://myserver/myshare/mydir/myfile.txt", "rt");
or:
...

and have it "just work"...

on Windows, it "sort of" works with UNC paths, but seems to depend on which API calls are used: it requires Win32 API calls rather than plain "fopen()", which seems only able to access local drives and shares mounted as network drives, ...


granted, if the OS can't do it, ideally whatever framework can provide a VFS API which makes it work (GVFS does something like this).

for example, Valve's Steam also provides a VFS, which can do a few nifty things (mostly provide access to contents of its "GCF" files), but it is notably sort of nasty and a pain to get access to (ideally, the API should be reasonably direct and self-initializing, rather than requiring any manual library loading, interface importing, or initialization calls).


my own framework in turn provides its own VFS API, mostly to provide a "generic" interface to all of the above (because I don't want to have to jerk around with all of this in application code). effectively this is done via "VFS drivers", which are basically registered vtables for FS-level and file-level operations (originally, the basic design idea came partly from early versions of the Linux kernel, although the way mounts/paths/... are handled is a bit different, ...).


or such...


_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
