Explanation: This is only vaguely SVG-ish, but it has to do with events, web apps, 
interfaces and drawing. I thought the SVG-developers group would have 
significant expertise here, and possibly some interest. (I'm also not sure whether a 
posting to [EMAIL PROTECTED] as a non-member would actually work, or 
whether it would be welcome from outside.) It is long, so unless the 
topics in the subject line actually interest you, please don't worry about 
reading it.
I got an iPhone a couple of weeks ago and have registered as a higher-education 
iPhone developer-something-or-other. I have only started to unpack the 
developers' goodies. It looks like "hello world" is a significant fraction of a 
megabyte, so I'm not sure how quickly I'll actually crawl into that mindset.
The reason I got it is that I teach senior-level courses in interface design in an 
undergraduate CS/IT program. The idea of having multiple 
points of touch has always appealed to me. After working on the PLATO system as 
a grad student, where we read lots of Minsky and Richard Karp, and getting to 
see the Xerox Star system in one of my professors' offices, I had begun to develop 
ideas about rich interfaces. My reaction to the first Mac I saw in '84 was 
"What, only one mouse? I will clearly need two." [to sculpt anything three-dimensional, 
for example]. The Apple folks that I had access to at the time 
were goodhearted salesfolk who nodded and smiled. Finally, now, as I approach 
my dotage, I might have the opportunity to program with more than one point of 
contact with the user.
And 3D accelerometers! What fun will that be? How might we use 3D 
accelerometers to carve our space into wonderful shapes? 
Now my friends at Opera have been telling me for more than a year about all 
the wonderful stuff they have in Opera, including the 3D canvas. I haven't had 
time to play with it, nor even to learn about it yet, but it can't be much more complex 
than abc or xyz now, can it?
So, I guess as I try to help my university figure out whether or not its 
expense in buying me an iPhone has paid off, I need to see if I can make the 
little gadget do anything. I have two ideas.
1. Make it into a 3D mouse for an Opera 3D canvas.
2. Use it to make a gestural semantics that a) works better than a keyboard 
and/or b) works really well for folks already conversant with ASL.
Let me explicate a little and then see if anyone has either suggestions or a 
willingness to participate in such an endeavor.
1. One of the little apps that ships as an example (I still remember what 
wonderful things Sun shipped with JDK 1.1) is just a little time-based plotter 
of acceleration data in the x, y and z axes. Integrate that curve twice, it 
seems, and you've got locations in 3-space (though the data may be so noisy 
that the integration drifts badly -- so smooth the curve 
first). How fast can one stream those data, and through what protocol 
(XMLHttpRequest?) and port (80?) does the little iPhone actually send data? Can I 
get it out quickly enough that it could be streamed to a server and thence 
through Ajax or JSON (or just plain old CGI text) to a browser, so that I could 
draw into a 3D canvas running in Opera (or Safari, or whoever else implements a 
3D canvas)? Or has anybody implemented enough of the WebSockets stuff yet that 
we could do it that way, with presumably greater speed?
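The double-integration step above can be sketched quite simply. This is a hypothetical illustration, not anything from the iPhone SDK -- the sample interval and the smoothing window are assumptions, and a real device would also need drift correction and gravity removal:

```python
# Sketch: recover positions along one axis from sampled accelerometer data
# by smoothing, then integrating twice (acceleration -> velocity -> position).

def moving_average(samples, window=5):
    """Smooth a list of floats with a simple centered moving average."""
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out

def integrate(samples, dt):
    """Trapezoidal cumulative integral; returns a list the same length."""
    total, out = 0.0, [0.0]
    for i in range(1, len(samples)):
        total += 0.5 * (samples[i - 1] + samples[i]) * dt
        out.append(total)
    return out

def accel_to_position(accel, dt=0.02, window=5):
    """Acceleration samples -> positions, smoothing first to tame noise."""
    smoothed = moving_average(accel, window)
    velocity = integrate(smoothed, dt)
    return integrate(velocity, dt)
```

As a sanity check, a constant acceleration of 1.0 over 0.2 seconds should land at about 0.5 * 1.0 * 0.2^2 = 0.02 units, which this does; the same routine would be run per axis on the x, y and z streams.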
Some may say that I should just get a 3D mouse of some sort and hook it via 
infrared/microwave/radio directly to a device on which I can draw, but that's 
hardly as much fun, and it's a considerably lower level than I would like to play at. 
Having it web-accessible means it can be broadcast, and that's rather nice too. 
Though, I know, there's always closed-circuit TV; but again, I don't really care 
about that.
The fundamental question is how can we get those data from a gadget (like an 
i-phone) to a web page quickly?
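Whatever the transport turns out to be (XMLHttpRequest polling today, WebSockets later), the wire format could be as simple as one JSON record per reading. A minimal sketch, with field names invented purely for illustration:

```python
# Sketch of a newline-delimited JSON wire format for accelerometer readings:
# each sample becomes one line, easy to POST in batches or push over a socket.
import json

def encode_reading(t, x, y, z):
    """One accelerometer sample -> one newline-terminated JSON record."""
    return json.dumps({"t": t, "x": x, "y": y, "z": z}) + "\n"

def decode_stream(text):
    """Parse a chunk of newline-delimited records back into (t, x, y, z) tuples."""
    readings = []
    for line in text.splitlines():
        if line.strip():
            r = json.loads(line)
            readings.append((r["t"], r["x"], r["y"], r["z"]))
    return readings
```

A server relaying this to the browser would just forward the lines; the page parses each record and feeds the triplets to the canvas drawing code.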
2. The keyboards on these little gadgets are way too small for someone who grew 
up decades before the generation of "texting," YMKWIM (you must know what I 
mean). While I appreciate the parsimony of highly fluent texting, I would rather 
learn Chinese or polish my Mongolian if I had the time.
One time, when I was a young psychology and math professor, I was invited to a 
party of people all of whom (except me) spoke ASL. I watched as three people 
all "talked" and understood one another, all at the same time. I was amazed. 
They said it was commonplace. I knew of research suggesting that the baud rate 
of the visual system was considerably (like 100-fold) higher than that 
of the auditory system, though I also knew how some had disputed that research. 
But here was something remarkable: three people speaking and listening at once.
So here's the idea: instead of typing with a tiny keyboard, can we use the 3D 
accelerometers to convey fundamental units of meaning (such as, but better 
than, the alphabet)? Gesture(X+Y+)(X-Z+)(Y+) = "generic action"; 
Gesture(X+Y+Z-)(X-) = "nominalizer"; etc.
A collection of suitable semantic primitives (notwithstanding my negative 
results: http://portal.acm.org/citation.cfm?id=12453.12456&coll=GUIDE&dl=GUIDE) 
is discussed in a beginning explanation of work I am dusting off after 30 years 
of dormancy -- see the "Meaning" topic and link at 
http://srufaculty.sru.edu/david.dailey/words/ .
Or we may prefer an alphabet: Gesture(X+Y+)(X-Z+)(Y+) = "A" ; 
Gesture(X+Y+)(X+Z-)(Y+) = "B"
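At its simplest, the recognition end of such a scheme is a lookup from token sequences to symbols. The two alphabet entries below come straight from the examples above; the table itself is of course hypothetical, and a real recognizer would first have to segment raw accelerometer data into these axis-sign tokens:

```python
# Sketch: a gesture is a sequence of axis-sign tokens; a table maps exact
# token sequences to symbols or semantic primitives.

GESTURE_TABLE = {
    ("X+Y+", "X-Z+", "Y+"): "A",
    ("X+Y+", "X+Z-", "Y+"): "B",
}

def recognize(tokens):
    """Return the symbol for an exact token sequence, or None if unknown."""
    return GESTURE_TABLE.get(tuple(tokens))
```

The same structure would serve whether the values are letters, ASL glosses, or semantic primitives -- only the table's contents change.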
Whether we semanticize using a set of semantic primitives, or using ASL or 
using an alphabet, the question is still the same: can we define a gestural 
semantic language that works as efficiently as a keyboard?
For some years I have argued (mostly for fun, but somewhat for real) that the 
development of speech was not really an advance for humans -- that moving from 
gesture to grunting may have freed our hands to pick vegetables as 
sharecroppers in some would-be pharaoh's grand vision of the future, but 
overall it is not necessarily the quickest way to get ideas across. The later 
transition to the written word, whereupon many languages merely alphabetized 
speech, was just a way of entrenching a questionable first step. The transition 
from writing to printing and then to HTML helped, but mainly because of 
liberalization in modes of distribution.* If we move back to the roots of 
communication, then maybe gestural semantics is, in the grand scheme, a faster 
way of getting ideas from mind to mind. And so long as our display devices and 
visual systems are primarily 2D, that's where SVG comes in.
At any rate, ideas 1 and 2 above seem worthy of some real work, and I'm 
wondering if anybody might want to help write a grant, fund a grant, 
volunteer some time, sign on as a paid worker on a grant, or point toward ways 
of actually doing any of the mechanics of talking from one of these gadgets to 
a web app. In the meantime, does the W3C have to worry about standardizing 3D 
accelerometer events or stream protocols? I suppose we have a sample frame 
rate, a smoothing function (possibly invertible?), and ranges/min-max 
normalized across x, y and z, and then the xyz triplets. Is that the sort of 
stuff the WebApps group does?
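The min-max normalization step mentioned above is straightforward to sketch. This is only an illustration of the idea -- a standardized event would presumably carry fixed device ranges rather than deriving them from the data as done here:

```python
# Sketch: per-axis min-max normalization of (x, y, z) accelerometer triplets
# into the range [0, 1], deriving each axis's range from the batch itself.

def normalize(triplets):
    """Min-max normalize a list of (x, y, z) triplets, independently per axis."""
    axes = list(zip(*triplets))              # three tuples, one per axis
    spans = [(min(a), max(a)) for a in axes]  # (lo, hi) for x, y, z
    out = []
    for t in triplets:
        out.append(tuple(
            (v - lo) / (hi - lo) if hi != lo else 0.0
            for v, (lo, hi) in zip(t, spans)))
    return out
```

With fixed, spec-defined ranges the function would be invertible, which matters if the receiver wants the raw physical values back.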
* Though printing gave access to embedded engravings, and that was a clear step 
forward; and certainly the "hyper" in hypertext was an actual advance, by 
fundamentally recognizing the extra-linear nature of both thought and 
expression. (Ted Nelson's Xanadu project was actually pretty cool.)

