>> we are more and more on the side of "usage analytics".
>> While I do not know the statistics module, I think a homogenization and
>> up-scaling of the activity-stream together with the statistics module
>> would be really useful and would support installers or admins in raising
>> usability.
> 
> For the AS it's a given, since it's unmaintainable as it is (written in
> wiki pages) and will be rewritten in Java, taking into account some use
> cases that are not currently implemented.
> 
> For the stats, yes, it would be a good idea too. First we have to assess
> the performance of the stats module: a big goal is to make XWiki faster,
> and I've heard plenty of times that stats are disabled because of
> performance issues, so we need to measure that first.

Yup, same impression here.
I think moving to another storage engine for stats is required for
"guaranteed" scalability.

> 
>> Maybe also spreading the practice of usability testing would be really 
>> useful.
> 
> Sure, if you have some process (especially automated) to do that, it would be 
> great.

This cannot be automated; it needs humans to craft useful collections and
queries, and to evaluate the search results.

> From a general POV we're already doing some kind of usability testing by
> having XWiki open source, releasing often, and having users discuss issues
> on our mailing lists/JIRA.

I know, but this does not employ defined testing processes. If such
processes could be easily reproduced by others, XWiki would gain a lot of
usability for specific applications.

> But if you have specific ideas, please shoot, so that we can discuss them :)

I can suggest precision and recall testing as a first step. An example run,
with references to further literature, is here:

http://direct.hoplahup.net/tmp/Math-Bridge-Evaluations/Test-suite-guidelinesMath-Bridge.pdf
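
For concreteness, precision and recall boil down to set arithmetic over the
retrieved and the relevant document sets. A minimal Java sketch (the class
and method names are mine, nothing XWiki-specific):

  import java.util.HashSet;
  import java.util.Set;

  public class RetrievalMetrics {

      // Precision: fraction of the retrieved documents that are relevant.
      public static double precision(Set<String> retrieved, Set<String> relevant) {
          if (retrieved.isEmpty()) {
              return 0.0;
          }
          Set<String> hits = new HashSet<>(retrieved);
          hits.retainAll(relevant);
          return (double) hits.size() / retrieved.size();
      }

      // Recall: fraction of the relevant documents that were retrieved.
      public static double recall(Set<String> retrieved, Set<String> relevant) {
          if (relevant.isEmpty()) {
              return 0.0;
          }
          Set<String> hits = new HashSet<>(retrieved);
          hits.retainAll(relevant);
          return (double) hits.size() / relevant.size();
      }
  }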

In general, this involves having users work in a sandboxed environment
(e.g. one limiting the set of documents available), perform a documented
sequence of actions, and report their results back from the UI (e.g. using
decorating checkboxes to mark a correct occurrence, or a text field to
request suggestions for supplementary content). It is generally run by
domain experts (in this case it was math teachers; in other cases it might
be a company department's document specialist?) who first describe their
sandbox (in communication with the wiki admin), then perform an agreed-upon
test sequence, and finally provide reports relevant enough for developers
(local application developers?) to see the situation the tester was in and
grasp what needs to be changed.
Often, as is the case with precision and recall, there is a way to
summarize the "quality" as a single number, which management always
loves...

Some of that can be automated, but only at the very end of the process,
when the platform does not change too much and the tests are somewhat
stable.
Some of it can also feed into unit testing... with some developer work, as
sketched below.
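
For instance, once a sandbox and its expected result set are frozen, an
agreed query could be wrapped into a plain JUnit test. Everything here is
hypothetical (searchSandbox, the document ids, the 0.8 threshold); it only
shows the shape such a test could take:

  import static org.junit.Assert.assertTrue;

  import java.util.Set;

  import org.junit.Test;

  public class SearchQualityTest {

      @Test
      public void derivativeQueryMeetsPrecisionThreshold() {
          // Run the agreed query against the frozen sandbox.
          Set<String> retrieved = searchSandbox("derivative");
          // Relevant documents, as listed by the domain experts.
          Set<String> relevant = Set.of("doc-12", "doc-31", "doc-47");
          assertTrue("precision regressed below the agreed threshold",
              RetrievalMetrics.precision(retrieved, relevant) >= 0.8);
      }

      // Stand-in for the real search call against the sandboxed wiki;
      // an assumption, not an existing XWiki API.
      private Set<String> searchSandbox(String query) {
          return Set.of("doc-12", "doc-31", "doc-47");
      }
  }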

paul