[CODE4LIB] A/B Testing Catalogs and Such

2011-01-26 Thread Sean Moore
There's a lot of resistance in my institution to A/B or multivariate testing
on any of our live production properties (catalog, website, etc...).  I've
espoused the virtues of having hard data to back up user activity (if I hear
one more "well, in my opinion...", I'll just go blind), but the reply is
always along the lines of, "But it will confuse users!"  I've pointed out the
myriad successful and critical businesses that use these methodologies, but
was told that businesses and academia are different.

So, my question to you is: which of you at academic libraries are using A/B
testing; on what portion of your web properties (catalog, discovery
interface, website, etc...); and, I suppose to spark conversation, which
testing suite are you using (Google Website Optimizer, Visual Website
Optimizer, a home-rolled non-hosted solution)?

I was told if I can prove it's a commonly accepted practice, I can move
forward.  So help a guy out, and save me from having to read another survey
of 12 undergrads that is proof positive of changes I need to make.

Thanks!

*Sean Moore*
Web Application Programmer
*Phone*: (504) 314-7784
*Email*:  cmoo...@tulane.edu

Howard-Tilton Memorial Library (http://library.tulane.edu), Tulane
University


Re: [CODE4LIB] A/B Testing Catalogs and Such

2011-01-26 Thread Bill Dueber
I've proposed A/B testing for our OPAC. I managed to avoid the torches, but
the pitchforks...youch!

On Wed, Jan 26, 2011 at 5:55 PM, Sean Moore <thedreadpirates...@gmail.com> wrote:

 [quoted message snipped]



-- 
Bill Dueber
Library Systems Programmer
University of Michigan Library


Re: [CODE4LIB] A/B Testing Catalogs and Such

2011-01-26 Thread Louis St-Amour
As Nielsen says in a 2005 Alertbox column, "Putting A/B Testing in Its
Place" (http://www.useit.com/alertbox/20050815.html),
a large part of why A/B testing is successful or not comes down to the
metric you're measuring to define that success. If there is no agreement on
the value of that metric, it's unlikely to get adopted...

Some examples of where it might work, however: consider the suggestion to
*add* a second link, or move a secondary link, to check account information
on the homepage. If more people click that link (as measured via referrals
or JavaScript) than use slower methods to get to the same page, you could
win. If you time how long people take before they get to the account page,
and it's shorter, that's another win. But this assumes there's always a
nearby alternative to the change, or that the user could ignore it if
they choose. (E.g. color, left-or-right, spacing, order/ranking, image...)
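
For concreteness, a rough browser-side sketch of that click-and-timing
measurement (the /ab-log endpoint, the account-link element id, and the
ab_variant cookie are all names I'm making up here, assuming the variant
was assigned elsewhere):

  // Report which variant the user saw, whether they clicked the tracked
  // link, and how long after page load the click happened.
  const variant = document.cookie.includes("ab_variant=B") ? "B" : "A";
  const pageLoadedAt = performance.now();

  document.getElementById("account-link")?.addEventListener("click", () => {
    // sendBeacon survives the navigation that follows the click.
    navigator.sendBeacon("/ab-log", JSON.stringify({
      variant,
      event: "account-link-click",
      msSincePageLoad: Math.round(performance.now() - pageLoadedAt),
    }));
  });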

If not, I suggest restricting changes to a smaller subset -- for instance,
those logged in to your site. Show certain accounts one thing, and other
accounts another. This is especially useful for application functionality
changes. You could even pull a Google Labs, if your user base is large
enough, and allow people to opt in to certain changes. Placing [x] buttons
or collapsible heading arrows serves a similar purpose, and can tell you
what features are used or ignored and when, if tracked.
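
A sketch of that kind of per-account split; keying off a stable account id
means the same person always sees the same variant, with no cookie needed
(the hash choice and the 50/50 split are mine, not anything standard):

  // Deterministically bucket a logged-in account into "A" or "B".
  function bucketForAccount(accountId: string): "A" | "B" {
    // djb2-style string hash; any stable hash would do.
    let hash = 5381;
    for (const ch of accountId) {
      hash = ((hash * 33) ^ ch.charCodeAt(0)) >>> 0;
    }
    return hash % 2 === 0 ? "A" : "B";
  }

  // Stable across sessions: the same id always lands in the same bucket.
  console.log(bucketForAccount("patron-10423"));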

Ultimately, outside of advertising conversions, A/B testing can be rather
limited unless it's used in conjunction with other kinds of analysis and
remote or local testing. Consider running a survey where participants opt in
to your questions and changes, so they know to expect things to shift around.
People like surprises, if they ask for them and feel included in the end
result. It's why people (like me) run beta or dev copies of Google Chrome:
they can try the latest features and will forgive you if things break.

*Highly recommended reading:* (which I've bought as DRM-free PDFs, and you
can too!)

1. *Driving Technical Change: Why People on Your Team Don't Act on Good
Ideas, and How to Convince Them They Should* by Terrence Ryan (The
Pragmatic Programmers)
http://www.terrenceryan.com/blog/post.cfm/i-m-an-author-driving-technical-change

2. *Clout: The Art and Science of Influential Web Content* by Colleen
Jones (New Riders)
http://content-science.com/expertise/clout-the-book

3. *Remote Research: Real Users, Real Time, Real Research* by Nate Bolt &
Tony Tulathimutte
http://rosenfeldmedia.com/books/remote-research/

However, don't expect much on A/B testing in any of these. The third book,
on page 127, puts A/B testing (or what the nerds like to call
"multivariate testing" these days) in the context of how to design your
research, what tasks to perform, etc. For the math, it recommends two books:
*[By "math", it refers to number-crunchy regression/conjoint/factor/quadrant
analysis, optimization, or any of the multivariate stuff you learned in AP
Calc.]*
*Measuring the User Experience* by Tom Tullis and Bill Albert (Morgan
Kaufmann Publishers). In this book you'll learn all about the kind of data
analysis we sheepishly avoid.
http://measuringuserexperience.com/

The other book, which goes into fine-grained, advanced automated research
techniques, is *Beyond the Usability Lab: Conducting Large-Scale User
Experience Studies* by Bill Albert, Tom Tullis, and Donna Tedesco (who was
also the technical editor for Remote Research).
http://www.beyondtheusabilitylab.com/
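
And if you want a taste of the math before the books arrive: the standard
tool for deciding whether variant B's conversion rate really beats variant
A's is a two-proportion z-test. A minimal sketch (the click counts are
invented):

  // z-score for the difference between two conversion rates.
  function twoProportionZ(
    successesA: number, trialsA: number,
    successesB: number, trialsB: number,
  ): number {
    const pA = successesA / trialsA;
    const pB = successesB / trialsB;
    // Pooled rate under the null hypothesis that A and B are identical.
    const pooled = (successesA + successesB) / (trialsA + trialsB);
    const se = Math.sqrt(pooled * (1 - pooled) * (1 / trialsA + 1 / trialsB));
    return (pB - pA) / se;
  }

  // e.g. 120/1000 clicks on A vs. 158/1000 on B:
  const z = twoProportionZ(120, 1000, 158, 1000);
  console.log(z.toFixed(2)); // |z| > 1.96 is significant at the 95% level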

Both of the above websites together have a ton of resources and examples.
Ultimately, however, it's up to you to influence people and processes to
improve both content and design in measurable ways. It all seems to come
down to politics regardless. Good luck!


Louis.

On Wed, Jan 26, 2011 at 6:09 PM, Bill Dueber <b...@dueber.com> wrote:

 [quoted messages snipped]