On 6/13/2011 3:19 AM, Julian Leviston wrote:

On 13/06/2011, at 7:50 PM, BGB wrote:

On 6/13/2011 1:33 AM, Julian Leviston wrote:
On 12/06/2011, at 1:00 PM, BGB wrote:

image-based systems have their own sets of drawbacks though...

dynamic reload could be a "good enough" compromise IMO, if done well...
I don't follow this train of thought. Everything runs in "an image". That's to say, the source code directly relates to some piece of running code in the system at some point. Smalltalk, Self and the like simply let you interact with the running code in the same place as the artefacts that create the running code. It's akin to programming in a debugger that saves the contents of memory constantly as "the source".

except that traditional source files have a "concrete" representation as so many files, and beyond these files there is nothing else of relevance (at least conceptually, a person could print a program to paper, re-type it somewhere else, and expect the result to work).

does it rebuild from source? does the rebuilt program work on the target systems of interest? if so, then everything is good.


an image based system, OTOH, often means having to drag around the image instead, which may include a bunch of "other stuff" beyond just the raw text of the program, and may couple the program and the particular development environment used to create it.

[SNIP]

or such...


This brings up an interesting point for me.

"Source" is an interesting word, isn't it? :) Source of what, exactly? Intention, right? The "real code" is surely the electricity inside the computer in its various configurations which represent numbers in binary. This is not textual streams, it's binary numbers. The representation is the interesting thing.... as are the abstractions that we derive from them.


yes, but as a general rule, this is irrelevant...
the OS is responsible for keeping the filesystem intact, and generally does a good enough job, and one can keep backups and hard-copies in case things don't work out (say, a good hard crash, and the OS goes and mince-meats the filesystem...).

as far as the user/developer is concerned, it is all text.
more so, it is all ASCII text, given some of the inherent drawbacks of using non-ASCII characters in one's code...


I don't think computer programs being represented as text is very appropriate, useful or even interesting. In fact, suffice it to say that it's a definite hate/love relationship. I *love* typography, text and typing, but this has little or naught to do with programming. Programming is simply "done" in this way by me at the moment, begrudgingly, because I have nothing better yet.

well, the issue is of course, that there is nothing obviously better.


Consider what it'd be like if we didn't represent code as text... and represented it maybe as a series of ideograms or icons (TileScript nod). Syntax errors don't really crop up any more, do they? Given a slightly nicer user interface than TileScript, you could still type your code (i.e. use the keyboard to fast-select tokens), but the computer won't "validate" any input that isn't in its "dictionary" of known possible syntactically correct items given whatever context you're in.
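The editor Julian describes can be sketched in a few lines: the editor consults a per-context "dictionary" of legal next tokens and simply refuses anything else, so syntactically invalid input cannot be entered at all. The grammar below is a made-up three-token toy (not any real language), just to show the mechanism.

```python
# Toy sketch of a structured editor's input filter.  NEXT_TOKENS maps a
# context (the last accepted token) to the set of tokens legal after it;
# the editor rejects any keystroke whose token is not in that set.
NEXT_TOKENS = {
    "start": {"if", "print"},   # tokens legal at the start of a statement
    "if":    {"x", "y"},        # after 'if' we expect a variable
    "print": {"x", "y"},
    "x":     {"then", "end"},
    "y":     {"then", "end"},
}

def accept(context, token):
    """Return True only if `token` is syntactically legal in `context`."""
    return token in NEXT_TOKENS.get(context, set())
```

With this in place, "fast-select by keyboard" is just filtering the current context's set by the typed prefix; a syntax error is no longer something that can exist in the buffer.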


but, what would be the gain?... the major issue with most possible graphical representations is that they are far less compact. hence the common use of graphical presentations to convey a small amount of information in a "compelling" way (say, a bar-chart or line-graph which represents only a small number of data-points).

apparently, even despite this, some people believe in things like UML diagrams, but given the time and effort required to produce them, combined with their exceedingly low informational density, I don't really see the point.

also, for most programming tasks, graphical presentation would not offer any real notable advantage over a textual representation.

at best, one has a pictographic system with a person new to the system trying to figure out just what the hell all of these "intuitive" icons mean and do. at that rate, one may almost as well just go make a programming language based on the Chinese writing system.

most non-Chinese can't read Chinese writing, despite the fact that many of its characters do actually resemble crude line-art drawings of various things and ideas.

and meanwhile, many Asian countries either have shifted to, or are in the process of shifting to, the use of phonetic writing systems (Koreans created Hangul, Kanji gradually erodes in favor of Hiragana, ...). even in some places in China (such as Canton) the traditional writing system is degrading, with many elements of their spoken dialect being incorporated into the written language.

this could be taken as an indication that there may be some fundamental flaw with pictographic or ideographic systems.

or, more directly:
many people new to icons-only GUI designs spend some time making use of "tool tips" to decipher the meaning of the icon...


By the way, Smalltalk and Self are perfectly representable in textual forms... ("file out" nod) just like JVM bytecode is perfectly representable in textual form, or assembler... but text probably isn't the most useful way to interact with these things... just as to "edit your text" you most likely use some form of IDE (and yes, I'd class VIM or EMACS as an IDE).


Java bytecode (JBC) is not usually manipulated as text, since it is generally compiler output. however, textual JBC becomes far more useful if one needs to work directly with the bytecode, hence an ASM syntax for JBC was later created (initially by 3rd parties), despite Sun's original intention that none should exist.

things like compiler output, and the bulk of mechanically generated/processed information, generally falls into a "don't know, don't care" category for the most part, and hence textual representations are generally a lower priority.


Do I need to represent here just how idiotic I think compilation is as a process? It's a series of text stream processors that aim at building an artefact that has little or nothing to do with a world that exists entirely in text. TEXT!!! It's a bad way to represent the internal world of computers, in my opinion. It'd be nice to use a system which represents things a few layers closer to "what's actually going on", and surely the FoNC project is aimed at a pedagogical direction intending to strip away layers of cruft between the image inside the head of a "user" ( or programmer) that they have representing how it works, and how it actually works...


I really don't see the merit, though, of opposing it...


so long as the process is sufficiently fast, it shouldn't matter that things are represented one way "over here", and some way very differently "over there".


actually, this is a common practice in the use of "black box" development methodologies, where one's data or program state is often represented in a number of different ways within different components, as each component provides both an internal representation for the data and a set of external interfaces.

an example would be an object or character in a 3D scene:
the user sees a unified entity, with all of its physics, its graphical representation, its sound effects, its AI, ...

but, internally, there may be:
a split between the client and server, with only a small number of data-points shared between them (say, where it is at, what model it is using, which animation frame it is using, ...).

then, the server may delegate all the physics off to a physics engine, which itself represents all of the physical properties of the object (these are then shuffled back and forth over an API). so, the physics engine has its own representation of an object, with things like an inertia-tensor, lists of contact constraints, ...

the server doesn't know or care, it just gets a stream of data-points: where the object is, how quickly it is moving, ...

meanwhile, the server doesn't know or care how it is handled on the client (to the server, things like which 3D model it is using, ... are represented as ASCII strings...)

off in the client end, the entity is again broken into multiple parts:
information about the model is passed to the renderer, and any light-sources/... may themselves be separately split off and passed off to the renderer; also, any sound-effects are converted into commands, and passed off to the sound mixer.

then the renderer sees its representation of the object:
as a 3D model sitting in its scene-graph in a specific pose at this moment in time, which it may then perform a number of operations on (figuring out which light sources are applicable, drawing lights and shadows, ...).

and, in the sound mixer, all there is is a moving point in space working as a sound emitter, as it calculates (from where it is and how quickly it is moving relative to the camera) its effective attenuation (volume, pan, ...), Doppler-shifts, post-processing effects (such as echoes or dampening), ...

...

but, the point is that there is no single or unified view of the object, but rather all knowledge of the object is broken down into a large number of subsystems, each knowing small pieces of the problem, and nearly everything else is shuffling information back and forth to keep everything synchronized.
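The "many views of one entity" idea above can be made concrete with a small sketch. Each subsystem keeps only the slice of the entity it cares about, and the "unified" object exists only as these synchronized projections. All names here are illustrative (not from any particular engine), and the mixer's attenuation and Doppler formulas are the simplest textbook versions, standing in for whatever a real mixer does.

```python
import math
from dataclasses import dataclass

@dataclass
class ServerView:        # server: just a few shared data-points
    pos: tuple
    vel: tuple
    model: str           # which 3D model, as an opaque string

@dataclass
class PhysicsView:       # physics engine: its own, richer state
    pos: tuple
    vel: tuple
    mass: float          # plus inertia tensors, contact lists, ... in a real engine

@dataclass
class SoundView:         # mixer: just a moving point source
    pos: tuple
    vel: tuple

def attenuation(snd, listener_pos):
    # simple inverse-distance falloff, clamped so volume never exceeds 1.0
    d = math.dist(snd.pos, listener_pos)
    return 1.0 / max(d, 1.0)

def doppler_factor(snd, listener_pos, c=343.0):
    # radial speed toward the listener raises pitch: f' = f * c / (c - v_r)
    dx = [l - p for l, p in zip(listener_pos, snd.pos)]
    d = math.hypot(*dx) or 1.0
    v_r = sum(v * x for v, x in zip(snd.vel, dx)) / d
    return c / (c - v_r)

# one entity, three projections, kept in sync by shuffling data-points around
server  = ServerView(pos=(0.0, 0.0), vel=(5.0, 0.0), model="crate.md3")
physics = PhysicsView(pos=server.pos, vel=server.vel, mass=10.0)
sound   = SoundView(pos=server.pos, vel=server.vel)
```

No component here holds the whole object; the renderer, physics engine, and mixer could each be swapped out without the others noticing, which is the point of the black-box decomposition.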

never mind all the stuff going on, independently, in the video-card, monitor, mouse and keyboard, ...


generally, compiler and VM technology happens to work roughly the same way...

but, why should the user need to know or care, as they work with the "unified" perception: the box as it falls to the ground, and 3D NPCs which walk around, do things, and say so many words of dialogue to the player.


so, the main issue, IMO, should not be one of eliminating text, but rather of reducing the inconveniences of rebuilding (ideally, so that the "rebuild" is itself nearly invisible), and getting performance fast enough that the programmer doesn't feel the need to wander off and get coffee or similar every time they have to rebuild their program...

for example, if the environment could be like "well, only this file was changed" and quickly recompile and hot-patch it in to a live program, who cares that a compiler and linker were involved, if they add at most a few milliseconds?...
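A minimal sketch of that "only this file was changed" loop, using Python's module machinery as a stand-in for a compiler-plus-linker (the names `HotReloader` and `live_mod` are made up for illustration; a real system would do this per compilation unit, at a lower level):

```python
import importlib.util
import os

def load_module(path, name):
    """Compile and execute a source file as a fresh module object."""
    spec = importlib.util.spec_from_file_location(name, path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

class HotReloader:
    """Rebuild a single source file into the live program only when it changes."""
    def __init__(self, path, name="live_mod"):
        self.path, self.name = path, name
        self.mtime = os.path.getmtime(path)
        self.module = load_module(path, name)

    def poll(self):
        m = os.path.getmtime(self.path)
        if m != self.mtime:              # "only this file was changed"
            self.mtime = m
            self.module = load_module(self.path, self.name)
            return True                  # hot-patched into the running program
        return False
```

The program keeps running across `poll()` calls; only the changed file is recompiled and swapped in, and the few milliseconds the "compiler" takes are invisible to the user.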


Mind you, I think human language is fairly silly, too... we communicate using mental "bubbles" of non-language based patterns, rendered into language, formed into text. It's well retarded... but this might be considered a little "out there", so I'll end here.

If I'm providing too much "noise" for the list, please anyone, let me know, and I'll be quiet.

well, I somewhat disagree here...


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
