Hi Peter,

This sounds like a really interesting project. Are you planning to adapt the information displayed on the screen based on the user's satisfaction? That could be quite risky unless your analysis is highly reliable: automatically changing what is displayed is frustrating for a user who did not want it to change.
I would probably not use pupil size data in this analysis, since it depends on so many different factors: lighting conditions, the brightness of the stimulus, and arousal (which in turn can be affected by numerous things). Unless you run the system in a very controlled environment, I think you would have a hard time getting accurate results.

This also touches on the key problem you face: you say you don't want to take into account what content is being shown to the user. It will be hard to determine the user's state of mind from the gaze pattern alone, because images in different layouts produce gaze patterns that differ a lot from reading patterns, even when the images are exactly what the user was looking for. Reading detection is probably fairly simple (see the rough sketch below), but it is only useful if all the content is text based. Will you be able to use information about the navigation structure in the algorithm?

Best regards,
Joakim Isaksson
----------------------------------------------------------------
Joakim Isaksson
Support Engineer
E-mail: [EMAIL PROTECTED]
Web: www.tobii.com
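P.S. To give a feel for what I mean by reading detection being fairly simple, here is a minimal sketch of the kind of heuristic people use: reading shows up as short rightward saccades along a roughly constant line, with occasional long leftward return sweeps, while scanning an image layout jumps around much more erratically. This is only an illustration under my own assumptions, not anything from our SDK; the fixation format and all thresholds are placeholders you would have to tune for your setup, and it only covers left-to-right scripts.

from typing import List, Tuple

Fixation = Tuple[float, float]  # (x, y) fixation position in pixels

def looks_like_reading(fixations: List[Fixation],
                       min_forward_ratio: float = 0.6,
                       max_line_drift: float = 30.0,
                       max_forward_jump: float = 250.0) -> bool:
    """Return True if the fixation sequence resembles left-to-right reading.

    Reading tends to produce short rightward steps with little vertical
    drift, plus occasional long leftward return sweeps to the next line.
    """
    if len(fixations) < 5:
        return False  # too little data to judge

    forward_steps = 0
    comparable_steps = 0
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dx < -max_forward_jump:
            continue  # likely a return sweep to a new line; skip it
        comparable_steps += 1
        if 0 < dx <= max_forward_jump and abs(dy) <= max_line_drift:
            forward_steps += 1

    return comparable_steps > 0 and forward_steps / comparable_steps >= min_forward_ratio

In practice you would run something like this over a sliding window of recent fixations and smooth the decision over time rather than classifying a single snapshot.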
