This doesn't really address what you're asking for; rather, what you're asking for reminds me of something else:

Several years ago I used to ask my students to program, as exercises, various useful widgets in HTML/JavaScript (color pickers, alarm clocks, zoomers and panners and sliders and so forth) -- the sorts of things that were not really standardized yet but which might be useful in applications of all sorts running in some WWW-like network. Fortunately I now hold patents on all of these (just kidding! but I am pleased that my prior art invalidates new patent claims that might arise).

One of the exercises was to develop a slider in which an ordinal discrete entry k < n could be chosen as quickly as possible for large integer n. The familiar example I gave was the digital timer on a stove: if you keep your finger down long enough, it accelerates from 1 min/sec of change to 10 min/sec of change. Could we make some sort of slider in which the movement of the mouse allows the construction of the numbers 162537, 2, or 791 more quickly than it would take to type them (or at least most of them) by hand? Detecting both the duration of the mousedown event and subtle changes in its position, I argued, could be used to do just that. None of the students (all undergrads) ever accomplished quite what I had in mind, so I've de-elevated (or elevated) the thing to extra-credit status.

The problem is similar in kind: how best to enable a user to select from a large number of possibilities. For certain objects, like magnitudes and numbers, there is a natural and intuitive ordering of options, such that some log-linear interface to an accelerometer allows (in theory) access to 2^n options in time linear in n. Likewise for the alphabetization of western-language-style strings: relying on the mapping that people presumably internalize between the alphabet and the line, a slider (like a selection list) maps n options onto points on a line. The irregularities along that line, posed by the discontinuity of actual words, are arguably negligible. When it comes to semantically discrete categories, the prospect of speeding users' ability to choose from among N options depends either on a) dividing those N options into log N intuitive categories (such was the goal of the Dewey and Library of Congress classifications) -- think of cascading submenus in which log N branches in a tree ultimately access any of the N leaves -- or on b) the ability to somehow map those N options onto a linear (or multidimensional) manifold that intuitively represents the space.
The problem in either case is in customizing our interface to people's intuition about the semantic proximity (in the original linguistic sense of the term, as differentiated from the newfangled spec-speak usage) of the various options presented.
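As a sketch of the stove-timer acceleration idea: a held control can step through values at a rate that grows geometrically with hold time, so large targets are reached in time roughly logarithmic in their magnitude. Everything here -- the function names, the 10x-per-second growth, the tick size -- is an illustrative assumption, not anything from a spec:

```javascript
// Illustrative accelerating stepper: the longer the mouse button is held,
// the larger each step becomes. The growth factor and period are
// arbitrary choices for the example.
function stepSize(heldMs, base = 1, factor = 10, periodMs = 1000) {
  // Step grows by `factor` for every full `periodMs` of continuous hold.
  return base * Math.pow(factor, Math.floor(heldMs / periodMs));
}

// Accumulate the value reached after holding for `totalTicks` ticks:
// during the first second each tick adds 1, during the next second 10,
// during the third 100, and so on.
function valueAfterHold(totalTicks, tickMs = 100) {
  let value = 0;
  for (let i = 0; i < totalTicks; i++) value += stepSize(i * tickMs);
  return value;
}
```

In a real widget, mouse movement during the hold could additionally nudge the factor up or down, which is the part the students never quite got working.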

You've suggested an approach which could involve server-side intervention: as options are chosen more frequently (across multiple users), those options eventually float closer to the top, and the spec, if responsive to your suggestion, would presumably allow the author to choreograph this client-server interaction. Assuming that the frequency of choices tends to obey some sort of Zipf's law, then the top 4 of the n items would account for k(1/2+1/4+1/8+1/16) of the options chosen, for some k, hence simplifying life for lots of people. I'm not sure, though, that Zipf's law would actually apply to most things. Population across geopolitical districts (states and countries) is probably distributed more like a bell curve than like word frequencies ("the", "of", "a"), so the advantage may not be as pronounced as we might hope.
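That arithmetic is easy to check. With the geometrically decaying shares used above (1/2, 1/4, 1/8, 1/16), the top four options cover 15/16 of all selections; under a true Zipf (1/r) distribution the head is much lighter. A quick sketch (function names are mine):

```javascript
// Share of selections captured by the top k options when frequencies
// fall off geometrically (1/2, 1/4, 1/8, ...), as in the text above.
function topShareGeometric(k) {
  let s = 0;
  for (let r = 1; r <= k; r++) s += Math.pow(2, -r);
  return s;
}

// The same share under a Zipf (1/r) distribution over n options.
function topShareZipf(k, n) {
  const harmonic = m => {
    let s = 0;
    for (let i = 1; i <= m; i++) s += 1 / i;
    return s;
  };
  return harmonic(k) / harmonic(n);
}

// topShareGeometric(4) is exactly 0.9375 (15/16), while for, say, 50
// options topShareZipf(4, 50) is only about 0.46.
```

So even granting a Zipf-like head, the benefit depends heavily on how steeply the tail decays.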

But as long as we're open to the kind of approach that allows the author to choose some sort of server-optimized ordering, then broader possibilities seem to arise. Telling the server to perform a principal components analysis based on some lexico-semantic criterion and then map the options onto a 2D plot, or to find the most intuitive taxonomy (based on method A, B, or C) for our options, would also become possible. Even the simplest case -- maximize users' selection speed for a large set of contiguous integers -- becomes rather fun. So fun, in fact, that it would be worth playing with.

Again, much of this is rather tangential to your idea, but the idea is just intriguing enough to provoke thought.

cheers,
David Dailey

----- Original Message ----- From: "Csaba Gabor" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 15, 2008 8:14 AM
Subject: [whatwg] Select element suggestion


I know it is rather late in the game, but there is an attribute that would be immensely useful for SELECT elements. It is motivated by a desire to make web page interaction more efficient.

It is frequently the case that SELECT elements of size 1 (drop downs) are quite long, requiring scrolling to reach most of the options. For example:
Year a person was born in
States of the US
Countries of the world
Height (in inches or cm)

This necessitates a minimum of 4 actions for most of the options (click to pull down the select, click on the scroll bar, drag the scroll bar, click the desired option) vs. 2 actions (the first and last) for those options near the top. Clearly the time difference is significant, especially when the same select element must be used on repeated visits. Furthermore, the overwhelming likelihood is that only a few of the many options will ever be of interest to any individual.

Therefore, it makes sense to float those values to the top of the select element in a reasonable way. What's reasonable? I would like to suggest:
frequencyLimit=percent


<select name="states" frequencyLimit="16">
<option>Alabama</option>
<option>Alaska</option>
<option>Arizona</option>
<option>Arkansas</option>
...
<option>Wyoming</option>
</select>

This would say that if any option is selected 16% or more of the time, it floats to the top part of the select element. It would apply to any select element with the same name and frequencyLimit at the current page's directory level or lower [for efficiency reasons].

Specifically, a reasonable UA implementation would be:
For each select element of a minimum size (say 9 options) with frequencyLimit set, the UA computes a checksum of the select element (e.g. MD5); if that matches the previously stored value, frequency analysis is done (otherwise, it starts with a fresh slate). The initial position (for reinsertion purposes) and selection count of each option are stored.

If the frequency with which an option is selected reaches the threshold, it is moved to the top portion of the select element. If its frequency drops back below the threshold, it is moved back to its original position.
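The float/restore rules could be prototyped client-side along these lines; the tracker below is a hypothetical stand-in for the UA's per-site cache (the names and the in-memory storage are my own assumptions, not part of the proposal):

```javascript
// Track selection counts per option and float any option whose observed
// share of selections meets the frequencyLimit threshold (a percentage).
function makeTracker(options, frequencyLimit) {
  const counts = Object.fromEntries(options.map(o => [o, 0]));
  let total = 0;
  return {
    select(option) {
      counts[option] += 1;
      total += 1;
    },
    ordered() {
      // Options at or above the threshold float, keeping their relative
      // order; the rest keep their original positions after them.
      const floated = options.filter(
        o => total > 0 && counts[o] / total >= frequencyLimit / 100
      );
      const rest = options.filter(o => !floated.includes(o));
      return [...floated, ...rest];
    },
  };
}
```

A real UA would persist the counts (and the checksum of the option list) across sessions rather than keep them in memory.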

This is clean, backwards compatible, and offers a clear usability advantage, especially for impaired users. It could be implemented in JavaScript on individual web pages, but it makes more sense to have a uniform approach.

A few comments/pitfalls:
1. If frequencyLimit=p, this does not float the Math.floor(100/p) most frequently selected options to the top; at most that many will get floated. For example, with frequencyLimit=33, if Oregon and New York are each selected 40% of the time, while Illinois and California are each selected 10% of the time, only Oregon and New York will float to the top, since only they exceed 33%.

To consider a second example in this light, frequencyLimit=5 might seem to mean that one can expect 20 options to float to the top, but it is extremely unlikely that there would be such a uniform distribution of selections. It's far more likely that fewer than 10 options would float to the top.

One could put something like frequencyLimit='.2', effectively floating anything that is ever selected to the top. However, this would create two separate lists within the select, likely leading to confusion. If M is the maximum number of options that can be seen at any one time, the frequencyLimit should probably ensure that no more than M options get floated to the top.
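The Oregon/New York example above can be worked directly: the threshold filters by share, not by rank, so with those hypothetical figures only two of the four named states float:

```javascript
// Hypothetical observed shares from the example under point 1.
const shares = { Oregon: 0.40, 'New York': 0.40, Illinois: 0.10, California: 0.10 };
const frequencyLimit = 33; // percent

// Only options whose share meets the threshold float, whatever their rank.
const floated = Object.keys(shares).filter(s => shares[s] >= frequencyLimit / 100);
// floated is ['Oregon', 'New York']; Illinois and California stay put.
```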

2. Of course this proposal requires the UA to cache the information across sessions, as with cookies, so it makes sense for the UA to limit the number of such cached select elements under any particular domain/directory. In addition, there is some privacy exposure, so these saved values should perhaps live and die with the corresponding cookie information.

3. As outlined above, the scheme does not work for dynamically populated select elements. However, in those situations frequencyLimit should not be set a priori (in the HTML); rather, it should be set after the select element is populated, at which point the UA would compare the select element against any cached image (i.e. checksum). This also provides a mechanism for the server to clear out cached information, by setting frequencyLimit on an empty select element.

4. Everything can apply just as well to multiselect elements and to elements of size greater than 1.

5. One could envision using only the last n selections (where n is, say, 100) for the analysis, but this puts an additional implementation burden on the UA.


Csaba Gabor from Vienna






