On 3/14/2012 11:31 AM, Mack wrote:
On Mar 13, 2012, at 6:27 PM, BGB wrote:

<SNIP>
the issue is not that I can't imagine anything different, but rather that doing 
anything different would be a hassle with current keyboard technology:
pretty much anyone can type ASCII characters;
many other people have keyboards (or key-mappings) that can handle 
region-specific characters.

however, typing unusual characters (those outside one's current keyboard 
mapping) tends to be a bit more painful, and/or introduces editor 
dependencies, and possibly increases the learning curve (now people have to 
figure out how these various unorthodox characters map to the keyboard, ...).

more graphical representations, however, have a secondary drawback:
they can't be manipulated nearly as quickly or as easily as text.

one option could be "drag and drop", but the problem is that drag and drop is 
still a fairly slow and painful process (vs. hitting keys on the keyboard).


yes, there are scenarios where keyboards aren't ideal:
such as on an XBox360 or an Android tablet/phone/... or similar, but people 
probably aren't going to be using these for programming anyways, so it is 
likely a fairly moot point.

however, even in these cases, it is not clear there are many "clearly better" 
options either (on-screen keyboard, or on-screen tile selector, either way it is likely 
to be painful...).


simplest answer:
just assume that current text-editor technology is "basically sufficient" and call it 
"good enough".
Stipulating that having the keys on the keyboard "mean what the painted symbols 
show" is the simplest path with the least impedance mismatch for the user, there are 
already alternatives in common use that bear thinking on.  For example:

On existing keyboards, multi-stroke operations to produce new characters 
(holding down shift key to get CAPS, CTRL-ALT-TAB-whatever to get a special 
character or function, etc…) are customary and have entered average user 
experience.

Users of IDEs like EMACS, IntelliJ or Eclipse are well-acquainted with special 
keystrokes to get access to code completions and intention templates.

So it's not inconceivable to consider a similar strategy for "typing" non-character graphical elements.  One 
could think of say… CTRL-O, UP ARROW, UP ARROW, ESC to "type" a circle and size it, followed by CTRL-RIGHT 
ARROW, C to "enter" the circle and type a "c" inside it.
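
As a rough illustration, such a modal sequence could be driven by a small state machine. 
A minimal sketch in C follows; the key names, editor modes, and the circle-growing 
behavior are all invented for illustration, not taken from any real editor:

#include <stdio.h>

/* hypothetical key codes and editor modes, invented for illustration */
enum key  { KEY_CTRL_O, KEY_UP, KEY_ESC, KEY_CTRL_RIGHT, KEY_CHAR };
enum mode { MODE_TEXT, MODE_SHAPE_SIZE, MODE_SHAPE_BODY };

struct editor {
    enum mode mode;
    int radius;             /* grown by repeated UP-ARROW presses */
};

/* feed one key event through the modal state machine */
static void handle_key(struct editor *ed, enum key k, char ch)
{
    switch (ed->mode) {
    case MODE_TEXT:
        if (k == KEY_CTRL_O) {              /* CTRL-O: begin a circle */
            ed->mode = MODE_SHAPE_SIZE;
            ed->radius = 1;
        } else if (k == KEY_CTRL_RIGHT) {   /* CTRL-RIGHT: enter the shape */
            ed->mode = MODE_SHAPE_BODY;
        } else if (k == KEY_CHAR) {
            printf("type '%c'\n", ch);
        }
        break;
    case MODE_SHAPE_SIZE:
        if (k == KEY_UP) {                  /* UP ARROW: grow the circle */
            ed->radius++;
        } else if (k == KEY_ESC) {          /* ESC: commit size, back to text */
            printf("circle committed, radius=%d\n", ed->radius);
            ed->mode = MODE_TEXT;
        }
        break;
    case MODE_SHAPE_BODY:
        if (k == KEY_CHAR)                  /* text lands inside the shape */
            printf("type '%c' inside circle\n", ch);
        break;
    }
}

int main(void)
{
    struct editor ed = { MODE_TEXT, 0 };
    /* CTRL-O, UP, UP, ESC, CTRL-RIGHT, 'c': make a circle, size it,
     * then type a "c" inside it */
    handle_key(&ed, KEY_CTRL_O, 0);
    handle_key(&ed, KEY_UP, 0);
    handle_key(&ed, KEY_UP, 0);
    handle_key(&ed, KEY_ESC, 0);
    handle_key(&ed, KEY_CTRL_RIGHT, 0);
    handle_key(&ed, KEY_CHAR, 'c');
    return 0;
}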

An argument against these strategies is the same one against command-line 
interfaces in the CLI vs. GUI discussion: namely, that without visual 
prompting, the possibilities that are available to be typed are not readily 
visible to the user.  The user has to already know what combination gives him 
what symbol.

One solution for mitigating this, presuming "rich graphical typing" was desirable, would 
be to take a page from the way "touch" type cell phones and tablets work, showing symbol 
maps on the screen in response to user input, with the maps being progressively refined as the user 
types to guide the user through constructing their desired input.
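
A minimal sketch of the progressive-refinement idea in C: after each keystroke, 
filter a symbol map by the prefix typed so far and redraw the candidates. The 
symbol names and glyphs in the toy table are illustrative, not from any real 
input method:

#include <stdio.h>
#include <string.h>

/* toy symbol map: spelled-out name -> glyph; entries are illustrative */
static const struct { const char *name, *glyph; } symtab[] = {
    { "alpha",   "\u03B1" },
    { "arrow",   "\u2192" },
    { "and",     "\u2227" },
    { "beta",    "\u03B2" },
    { "because", "\u2235" },
};

/* show every symbol whose name still matches what the user has typed;
 * called after each keystroke, so the on-screen map refines as they type */
static void show_candidates(const char *typed)
{
    size_t i, n = strlen(typed);
    printf("'%s' ->", typed);
    for (i = 0; i < sizeof(symtab) / sizeof(symtab[0]); i++)
        if (strncmp(symtab[i].name, typed, n) == 0)
            printf(" %s(%s)", symtab[i].glyph, symtab[i].name);
    printf("\n");
}

int main(void)
{
    show_candidates("a");   /* alpha, arrow, and */
    show_candidates("ar");  /* refined down to: arrow */
    return 0;
}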

…just a thought :)

typing, like on phones...
I have seen 2 major ways of doing this:
hit a key multiple times to indicate the desired letter, with a certain timeout before it moves to the next character;
type out the characters, with the phone showing the first/most-likely possibility, and hitting a key a bunch of times to cycle through the options.
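
the first of these (multi-tap) is simple enough to sketch. a toy C version, 
where '.' stands in for the inter-letter timeout (the tap-string encoding here 
is invented purely for illustration):

#include <stdio.h>
#include <string.h>

/* classic multi-tap: each digit key carries a group of letters, and
 * repeated presses (before the timeout) cycle through that group */
static const char *groups[10] = {
    " ", "", "abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"
};

static void commit(int key, int count)
{
    if (key >= 0 && groups[key][0])     /* key 1 has no letters here */
        putchar(groups[key][count % (int)strlen(groups[key])]);
}

/* decode a tap string like "44.444", where '.' stands in for the
 * timeout that separates two letters typed on the same key */
static void decode(const char *taps)
{
    int key = -1, count = 0;
    for (const char *p = taps; ; p++) {
        if (*p >= '0' && *p <= '9') {
            if (*p - '0' == key) {
                count++;                /* same key again: cycle onward */
            } else {
                commit(key, count);     /* new key: commit pending letter */
                key = *p - '0';
                count = 0;
            }
        } else {                        /* '.' or end of input */
            commit(key, count);
            key = -1;
            if (*p == '\0')
                break;
        }
    }
    putchar('\n');
}

int main(void)
{
    decode("44.444");   /* prints "hi": 4,4 = 'h'; timeout; 4,4,4 = 'i' */
    return 0;
}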


another idle thought would be some sort of graphical/touch-screen keyboard, but it would be a matter of finding a way to make it not suck. using on-screen inputs on Android devices and similar kind of sucks:
pressure and sensitivity issues;
comfort issues;
lack of tactile feedback;
smudges on the screen if one uses their fingers, and potentially scratches if one is using a stylus;
...

so, say, a touch-screen with these properties:
similar sized (or larger) than a conventional keyboard;
resistant to smudging, fairly long lasting, and easy to clean;
soft contact surface (I'm thinking sort of like those gel insoles for shoes), so that ideally typing isn't an experience of constantly hitting a piece of glass with one's fingers (ideally, both impact pressure and responsiveness should be similar to a conventional keyboard, or at least a laptop keyboard);
ideally, some sort of tactile feedback (so one can feel whether or not they are actually hitting the keys);
dynamically reprogrammable, as in the sketch after this list (say, any app which knows about the keyboard can change its layout when it gains focus, or alternatively the user can supply per-app keyboard layouts);
maybe, there could be tabs to change between layouts, such as a US-ASCII tab, ...
...
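
a minimal sketch of the "dynamically reprogrammable" part in C, assuming a 
hypothetical per-app layout table (the app names, layouts, and the toy 4-key 
keyboard are all invented for illustration):

#include <stdio.h>
#include <string.h>

/* a reprogrammable keyboard as a per-app layout table; when an app
 * gains focus, its layout is pushed to the keyboard's display */
struct layout {
    const char *name;
    const char *keys[4];    /* what each key of a toy 4-key keyboard shows */
};

static const struct layout layouts[] = {
    { "us-ascii", { "a", "s", "d", "f" } },
    { "math",     { "\u2200", "\u2203", "\u2211", "\u222B" } },
};

/* look up the layout an app asked for when it gained focus;
 * a real system would consult per-app user config, hard-coded here */
static const struct layout *layout_for_app(const char *app)
{
    if (strcmp(app, "math-editor") == 0)
        return &layouts[1];
    return &layouts[0];
}

int main(void)
{
    const struct layout *l = layout_for_app("math-editor");
    printf("focus: math-editor, layout: %s\n", l->name);
    for (int i = 0; i < 4; i++)
        printf("key %d shows %s\n", i, l->keys[i]);
    return 0;
}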

with something like the above being common, I can more easily imagine people using non-ASCII based input methods.

say, one is typing in US-ASCII and hits a "math-symbol" layout where, for example, the numeric keypad (or maybe the whole rest of the keyboard) is replaced by a grid of math symbols.
or maybe also have a "drawing tablet" tab, whereby they can use the keyboard sort of like a drawing tablet or similar, probably with photo-app controls off to the side (hit the "airbrush" key and start air-brushing, fingerpainting style...).


as-is though, doing much that is significantly different on current keyboards is about like trying to type text messages on cell-phones, or, IOW, likely to be a slow and kind of painful experience.

yes, there are selectable keyboard layouts in the OS, but this kind of sucks, and there is no visual feedback to help a person learn which key is which, and putting little stickers on the key-caps would also kind of suck.



<SNIP>
On Mar 13, 2012, at 6:27 PM, BGB also wrote:


I'll take Dave's point that penetration matters, and at the same time, most "new ideas" have 
"old idea" constituents, so you can easily find some matter for people stuck in the old 
methodologies and thinking to relate to when building your "new stuff" ;-)

well, it is like using alternate syntax designs (say, not a C-style "curly 
brace" syntax).

one can do so, but is it worth it?
in such a case, the syntax is no longer what most programmers are familiar or 
comfortable with, and it is more effort to convert code to/from the language, …
The degenerate endpoint of this argument (which, sadly, I encounter on a daily basis in 
the larger business-technical community) is "if it isn't Java, it is by definition 
alien and too uncomfortable (and therefore too expensive) to use".

yes, this is the case for many developers.

many others are more comfortable within a limited range of languages (say: Java, C++, and C#), but will have little tolerance for pretty much anything outside of this range.


We can protest the myopia inherent in that objection, but the sad fact is that perception 
and emotional comfort are more important to the average person's decision-making process 
than coldly rational analysis.  (I refer to this as the "Discount Shirt" 
problem.  Despite the fact that a garment bought at a discount store doesn't fit well and 
falls apart after the first washing… not actually fulfilling our expectations of what a 
shirt should do, so ISN'T really a shirt from a usability perspective… because it LOOKS 
like a shirt and the store CALLS it a shirt, we still buy it, telling ourselves we've 
bought a shirt.  Then we go home and complain that shirts are a failure.)

practically though, the mainstream languages do tend to be fairly solid at doing the things they do.

much of the risk comes with less common and "unproven" languages, which often have a much less solid implementation (in terms of things like performance, reliability, library completeness, ...), these being exactly the things that "serious" software development regards as important.

also, it is generally seen as desirable if programmers are interchangeable and dispensable, so that business types can fire or hire them as they see fit (firing people when the budget is tight, hiring them when they want a project to get done faster, ...).

using more standard technologies has the merit that it makes programmers easier to hire and fire whenever the higher ups feel it is needed (rather than worry if certain people have knowledge which would be difficult to replace, ...).


not that all this itself is necessarily ideal, but this is more of a "just the way it is" scenario, and one doesn't really benefit by rejecting certain groups of people or the way they do things just because one doesn't entirely agree with them.


Given this hurdle of perception, I have come to the conclusion that the only reasonable way to make advances is to live 
in the world of use case-driven design and measure the success of a language by how well it fits the perceived shape of 
the problem to be solved, looking for "familiarity" on the part of the user by means of keeping semantic 
distance between the language concepts and the problem concepts as small as possible.  To that end, 
"familiarity" in terms of "uses comfortably oldskool syntax/algorithm, etc" is vastly subordinate 
in importance to "the shape of the problem domain and domain-appropriate solution behaviors are instantly 
recognizable in the syntax and semantics of the language."

So use the syntax whose shape most becomes invisible to the reader and best 
exposes the shape of the domain it is representing.  Sometimes, this IS a 
traditional syntax.  In other cases, it's not because the traditional syntax is 
laboring under other burdens (not enough bracket characters in the graphology, 
for example) that cause it to ultimately do a poor job.

the problem is, however, that this risks reducing the "general" familiarity of the language (even if it happens to be closer to the problem domain).

the likely much better option (in terms of familiarity and acceptance) is to basically try to make everything as "generic" as can reasonably be done (reserving novelties mostly for cases where the powers of cost/benefit would seem to be in their favor).

the usual assumption is that the vast majority of developers will have no reason to know or care what is going on inside the implementation of the language they are using, so the compiler and VM are a black box, ...

nearly anything is justified internally, almost regardless of how complex or how much of an "evil hack" it happens to be to make it work, as what the thing looks like from the outside is what is important.


but, anyways, the brace issue is still likely to be fairly minor from a user's POV, and otherwise, languages like Java and C# have a similar issue regarding parsing generics.

for example, in C#, operators like ">>" and ">=" and so on are not parsed as dedicated tokens, because of the concern that this could foul up parsing generics; instead, they are parsed as multiple tokens which need to be directly adjacent with no whitespace. likewise for "<[[" and "]]>". it seemed a better bet to parse them specially, rather than worry that in some edge case dedicated tokens for them would lead to something somewhere else being misparsed.
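
a minimal sketch of the adjacency approach in C (the token representation is 
invented for illustration; this is not the actual C# or BGB parser):

#include <stdio.h>

/* the lexer only ever produces single '>' tokens, and the parser
 * recognizes a shift operator when two '>' tokens touch with no
 * whitespace in between */
struct token { char ch; int pos; };   /* pos: offset in the source text */

/* is tok[i],tok[i+1] a ">>" (directly adjacent, no whitespace)? */
static int is_shr(const struct token *t, int i, int n)
{
    return i + 1 < n
        && t[i].ch == '>' && t[i + 1].ch == '>'
        && t[i + 1].pos == t[i].pos + 1;
}

int main(void)
{
    /* "a >> b" lexes as '>'@2 and '>'@3: adjacent, so it is a shift;
     * "Foo<Bar<int> >" would give '>'@11 and '>'@13: two closers */
    struct token toks[] = { { '>', 2 }, { '>', 3 } };
    printf("shift? %s\n", is_shr(toks, 0, 2) ? "yes" : "no");
    return 0;
}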

but, the user doesn't need to care that they are not real tokens any more than the typical C# developer needs to care that his shift operators are not real tokens...

I am a bit more "on the fence" regarding generics. I have parser code for them, but am not fond of the syntax, and they aren't really strictly necessary at this point ("var x:Something<int, string>;"). I had used a different trick in parsing them though (token splitting, sketched below), so I didn't have to eliminate ">>" and so on from being tokens. as-is, generics don't do anything though, and aren't officially part of the language.
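
a minimal sketch of the token-splitting trick in C (again with an invented 
token representation, not the actual parser):

#include <stdio.h>
#include <string.h>

/* ">>" stays a real token, and the generics parser, on needing a
 * single closing '>', splits ">>" in place into '>' + '>' */
struct token { char text[4]; };

/* called when a generic argument list needs its closing '>':
 * if the current token is ">>", consume one '>' and leave the other */
static int expect_close_angle(struct token *cur)
{
    if (strcmp(cur->text, ">") == 0) {
        cur->text[0] = '\0';            /* token fully consumed */
        return 1;
    }
    if (strcmp(cur->text, ">>") == 0) { /* split: keep one '>' for the */
        strcpy(cur->text, ">");         /* enclosing argument list     */
        return 1;
    }
    return 0;                           /* parse error */
}

int main(void)
{
    /* closing "Something<Other<int>>": the inner list sees ">>",
     * splits it, and the outer list consumes the remaining ">" */
    struct token t = { ">>" };
    printf("inner close: %d, left: '%s'\n", expect_close_angle(&t), t.text);
    printf("outer close: %d, left: '%s'\n", expect_close_angle(&t), t.text);
    return 0;
}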

in any case, it is all a fairly minor issue vs the issues around parsing declarations and similar in C++.


or such...

_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
