I generally agree with your analysis, Marc, notwithstanding that there is
blame to share on all sides - not just users who point to broken edge
cases. The (quite predictable) behaviour you mention is why I was so
fond of the way the "usability initiative" from several years ago (the team
that built the Vector skin) operated.

They made it very easy to opt in to, AND out of, the beta version of their work.
They also made it very clear that when the proportion of people who stayed
"in" reached a certain level (if I recall correctly it was 80%, and
certainly not 95%+), that would signal that the software had reached a
certain level of acceptance, and that a sufficient number of edge cases had
been addressed. Until that point they would focus on working with the
people who had tried the beta but had turned it OFF, to identify their
personal edge cases - in the hope that fixing those would also fix other
people's problems.
Much like you described.

The key here in my opinion is:
- clear communication about what state constitutes "success" (e.g. "When
80% of people who have opted in have STAYED opted-in")
- clear communication about the progress towards that state (e.g. showing
the "success" factor in the statistics on the "beta features" tab, and how
close we are to it)
- only moving to the next stage when that state has been reached (not on a
fixed schedule, but "it is ready when it is ready, not before")
- making it easy to try, and withdraw from, new things, always starting
with opt-in before making it opt-out.

Since the "usability initiative", the "beta" preferences function has been
introduced so that editors can test new software and provide feedback.
I would like to see this used more consistently, and with more clarity about
what stage of "beta" each individual element within that system is at.
Currently all I know is that there is a list of items and an absolute
number of users for each item (e.g. "1,234 people are using this").
Would it be difficult to make it show what proportion of people who
have tried any given beta feature are STILL using it? That would at least
give an indicator of popularity/acceptance.
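To make the suggestion concrete, a minimal sketch of that retention metric - assuming the software could count, per feature, how many people have ever enabled it and how many currently have it enabled (the feature names and numbers below are purely illustrative):

```python
def retention(ever_tried: int, still_using: int) -> float:
    """Proportion of users who opted in to a beta feature and stayed opted in."""
    if ever_tried == 0:
        return 0.0  # avoid division by zero for untried features
    return still_using / ever_tried

# Illustrative counts only: (users who ever enabled it, users currently enabled)
features = {
    "Feature A": (5000, 3500),
    "Feature B": (8000, 4000),
}

for name, (tried, using) in features.items():
    print(f"{name}: {retention(tried, using):.0%} retention")
```

Showing that percentage next to the existing absolute user count, together with the agreed "success" threshold (e.g. 80%), would make the progress of each beta feature legible at a glance.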

-Liam / Wittylama.

On Tuesday, 2 September 2014, Marc A. Pelletier <m...@uberbox.org> wrote:

> On 09/02/2014 02:52 AM, Yann Forget wrote:
> > OK, I could buy that [fixing image pages]. But then why not
> > fixing that *first*, so that
> > any MV implementation coming afterwards would be smooth?
> In the best of worlds, that would have been ideal.
> Now, no doubt I'm going to be branded a cynic for this, but have you
> ever /tried/ to standardize something on a project?  Obviously, my frame
> of reference is the English Wikipedia and not Commons; but in a world
> where there exist at least six distinct templates whose primary
> function is to transclude a single "<references/>" onto a page and where
> any attempt to standardize to one of them unfailingly results in edit
> wars, that doesn't seem like a plausible scenario.
> Perhaps the problem is more fundamental than this, and we're only seeing
> symptoms. I don't know.
> But I /do/ know that waiting until every edge case is handled before
> deploying (attempted) improvements to the site is doomed to failure.  If
> only because most of the edge cases won't even be /findable/ until the
> software is in place, waiting can't work even in principle.
> IMO, in practice, "get it working for the general case and most of the
> obvious edge cases" is a reasonable standard; and I'm pretty sure that
> MV qualified under that metric (and VE didn't).
> I suppose much of my frustration over the MV kerfuffle is borne out of a
> reaction I see much too often for my taste: editors yelling "OMG, look,
> image X isn't properly attributed/licensed/etc in MV!  Burn it with
> fire!!!" rather than figuring out why X's image page isn't properly
> parsed and /fixing/ it (and possibly an underlying template that could
> fix dozens/hundreds of others in one fell swoop). I'm pretty sure that if half
> as much effort had been spent fixing issues as was attempting to kill
> MV, its fail rate would already be at "statistical anomaly" levels.
> .. but my inner cynic is also pretty sure that many of the loudest
> voices wanting to get rid of MV ostensibly because of its failings don't
> actually /want/ those failings to be fixed because being able to say
> "It's broken" rather than "I don't like it" sounds much more rational.
> -- Marc
> _______________________________________________
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> <mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe>

Peace, love & metadata