On Oct 10, 2009, at 5:38 PM, David Mulder wrote:
One memorable example is a conversion point (in the form of a button-looking thing) that was sitting near the top of a longer-ish page. The Userfly recordings I watched showed practically everyone, after reaching the page, scrolling past the conversion point and not going back to it. Before we had conducted a formal user test, we were able to show our client a major problem point (they were impressed with the turnaround time for feedback). This finding allowed us to focus formal testing time on that page, which I thought was really cool.
That's interesting, but I have some questions.
First, how did you know that the sessions Userfly captured (and you ended up watching) were from people who should convert? Maybe the lead generation system is attracting the wrong folks? Did you have a way to collect any information about the people in your study?
Second, as Todd pointed out, why did you assume that the users wanted
to convert?
Third, wouldn't your analytics have told you that people weren't converting?
Fourth, what did your subsequent formal usability test (it's not a "user test", as the users aren't the ones being tested -- the design is) tell you was happening? Did the participants in that test match the behaviors seen in the Userfly sessions?
Just curious if this is something more than a parlor trick to get clients to pay attention and enthusiastically do better work. (I have nothing against using parlor tricks. I just want us to be honest about it when we're talking amongst ourselves.)
Jared