Benoit, 

I'm old school in all this, so my ideas about a generic approach to things will
likely seem old-fashioned.

My first stab at putting together a gesture-based interface would be to use
some sort of control overlay in the map area. This is mostly to provide
instant feedback to the user.


Some important points (I think): 

** Symbols for zooming in (+) and out (-) right on the map itself. This
might actually be considered a failsafe mode, used when no other mode works
for a device, and it would rely on essentially existing user-control
capabilities in a mapping environment. Dragging seems (so far) to be a
universal capability that can be implemented on (nearly) all devices.
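
To make that concrete, here's a quick sketch (TypeScript, with a made-up
PannableMap interface; none of these names come from OpenLayers itself) of
what the failsafe controls might look like:

  // Hypothetical map interface; zoomIn/zoomOut/pan are assumptions,
  // not real OpenLayers calls.
  interface PannableMap {
    zoomIn(): void;
    zoomOut(): void;
    pan(dxPixels: number, dyPixels: number): void;
  }

  function addFailsafeControls(mapDiv: HTMLElement, map: PannableMap): void {
    // The +/- symbols, right on the map itself.
    const zoomIn = document.createElement("button");
    zoomIn.textContent = "+";
    zoomIn.addEventListener("click", () => map.zoomIn());
    const zoomOut = document.createElement("button");
    zoomOut.textContent = "-";
    zoomOut.addEventListener("click", () => map.zoomOut());
    mapDiv.appendChild(zoomIn);
    mapDiv.appendChild(zoomOut);

    // Drag-to-pan with plain mouse events, the (nearly) universal case.
    let last: { x: number; y: number } | null = null;
    mapDiv.addEventListener("mousedown", (e) => {
      last = { x: e.clientX, y: e.clientY };
    });
    mapDiv.addEventListener("mousemove", (e) => {
      if (!last) return;
      map.pan(last.x - e.clientX, last.y - e.clientY);
      last = { x: e.clientX, y: e.clientY };
    });
    mapDiv.addEventListener("mouseup", () => {
      last = null;
    });
  }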

** Some sort of (fairly comprehensive) device-type detection (or a user
setup control for choosing a device type) will need to be in place.
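
Something along these lines, maybe (standard browser feature detection
plus a user override; the "navMode" key and the mode names are invented):

  type NavMode = "failsafe" | "singletouch" | "multitouch";

  function detectNavMode(): NavMode {
    // A user setup control could store an explicit choice.
    const override = window.localStorage.getItem("navMode");
    if (override === "failsafe" || override === "singletouch" ||
        override === "multitouch") {
      return override;
    }
    if ("ontouchstart" in window) {
      // Touch is present, but single vs. multi is hard to tell without
      // probing real touch events, so start conservative.
      return "singletouch";
    }
    return "failsafe";
  }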

** With a "failsafe" mode built in and always available, the process of
adding the gesture aspects seems to fall into the category of an add-on.
As an aside, with Netbook / Tablet devices becoming more prevalent, there
may be some room here to add key-macro navigation tools as well as an
optional navigation toolset, but that is likely only interesting to me.
Still, if this approach of allowing more than one type of navigation
toolset were used, I think the different platform specifics could be
handled more easily. Am I talking about a navigation conduit here?
Something standardized in some form that many different types of
navigation tools could be developed against? There are certain work
processes, for example, that might require direct access to a navigation
tool in a very specific manner, and a framework like this might help with
that during development.
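
The "conduit" might be nothing more than a small standardized interface
that every navigation toolset is written against. All of these names are
invented for illustration:

  interface NavConduit {
    zoomIn(): void;
    zoomOut(): void;
    pan(dx: number, dy: number): void;
  }

  interface NavToolset {
    readonly name: string;
    attach(target: HTMLElement, nav: NavConduit): void;
    detach(): void;
  }

  class NavRegistry {
    private toolsets = new Map<string, NavToolset>();
    register(t: NavToolset): void {
      this.toolsets.set(t.name, t);
    }
    // Activate whichever toolsets the device (or the user) calls for.
    activate(names: string[], target: HTMLElement, nav: NavConduit): void {
      for (const n of names) {
        this.toolsets.get(n)?.attach(target, nav);
      }
    }
  }

That way the failsafe buttons, a single-touch gesture set, key macros, and
any vendor-specific extras could all be registered side by side against
the same conduit.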

** Associated with the navigation are layer choosers of some sort.
Displaying many layer options (or many of anything) is a problem on a
smaller form-factor device. I think this is just as important an aspect of
the gesture controls (for mapping) as the navigation. While most map-based
solutions may only present a few layers to the end user, I have systems in
place with hundreds of layers available, and it's very difficult to
present that much information even to users you have the option of giving
a little one-on-one training, let alone to the typical user.
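
One way to keep hundreds of layers usable on a small screen is to group
them and filter by typed text, so the visible list stays short. A sketch
(the shape of LayerEntry is made up here):

  interface LayerEntry {
    id: string;
    title: string;
    group: string;
  }

  function filterLayers(layers: LayerEntry[],
                        query: string): Map<string, LayerEntry[]> {
    const q = query.trim().toLowerCase();
    const byGroup = new Map<string, LayerEntry[]>();
    for (const layer of layers) {
      if (q && !layer.title.toLowerCase().includes(q)) continue;
      const bucket = byGroup.get(layer.group) ?? [];
      bucket.push(layer);
      byGroup.set(layer.group, bucket);
    }
    return byGroup; // render one collapsible section per group
  }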


I would prefer to use a common set of code to accomplish all of this, with
some sort of auto-detection of the device on the client side. The gestures
themselves seem to vary in their implementation from one vendor's products
to another's. Is there room here for some sort of user settings/preferences
(like in a desktop application) where the user can decide, based on the
device capabilities, which gestures to enable? There could be defaults for
known devices.
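
Roughly like this, maybe (the device categories, gesture names, and
default values below are all placeholders, not tested claims about any
real hardware):

  interface GesturePrefs {
    dragPan: boolean;
    pinchZoom: boolean;
    doubleTapZoom: boolean;
  }

  // Defaults for known device categories, with a conservative fallback.
  const DEVICE_DEFAULTS: Record<string, GesturePrefs> = {
    multitouchPhone:  { dragPan: true, pinchZoom: true,  doubleTapZoom: true },
    singletouchPhone: { dragPan: true, pinchZoom: false, doubleTapZoom: true },
    unknown:          { dragPan: true, pinchZoom: false, doubleTapZoom: false },
  };

  function effectivePrefs(device: string,
                          userPrefs: Partial<GesturePrefs>): GesturePrefs {
    const defaults = DEVICE_DEFAULTS[device] ?? DEVICE_DEFAULTS["unknown"];
    return { ...defaults, ...userPrefs }; // user choices win over defaults
  }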

bobb

>>> Benoit Quartier <benoit.quart...@camptocamp.com> wrote:


Bob,


On Tue, Dec 21, 2010 at 5:40 PM, Bob Basques <bob.basq...@ci.stpaul.mn.us> 
wrote:



Benoit, 


I couldn't get the zooming to work at all on the N900. But I don't count that 
as a fault. 


I understand the complexities here, especially with regard to multi-touch
vs. single-touch enabled devices. I think that in the near term the gesture
aspects are going to NEED to be targeted at vendor specifics in order to
take full advantage of each of them. Hopefully this will flesh out into a
standard based on the best available, but, in the near term, I'm interested
in seeing a process that works for single touch (could that be all
phone/mobile devices??) as a foundational chunk of coding. Hopefully this
approach would get as many functional mobile devices accounted for as
possible. Then it makes sense to attack the vendor-specific (extra)
capabilities. It seems to be easier to design for the masses where possible
(from my experience), and then to enhance for the specialties, don't you
think?
 

Yes, I fully agree. That's why we didn't begin with Apple gestures. But as
you wrote, in the near term we will need to implement these vendor-specific
gestures.

It would be great to have a set of single-touch events that works on all
devices (as much as possible) and, additionally, a set of vendor-specific
gestures. What would they be? Double tap to zoom in, triple tap to zoom
out?

I think there are options available for addressing these ideas, if anyone else 
is interested. 
 

I am interested, but I am not sure I understand what you mean. Could you
please elaborate?


_______________________________________________
Dev mailing list
d...@lists.osgeo.org
http://lists.osgeo.org/mailman/listinfo/openlayers-dev
