> At 10:27 PM 21/07/2016, Chris Maltby wrote:
> >The other audit capability is the (incomplete) counts of senate
> >first preferences by group that was conducted manually in polling
> >booths on election night. This data is available for statistical
> >comparison with the booth-by-booth final vote data and that would
> >also show up any significant favouritism in the data entry process. 

On Fri, Jul 22, 2016 at 07:51:00AM +1000, JanW wrote:
> I was going to suggest this as a QA measure: sample a subset of
> the votes in each machine to test accuracy. That wouldn't be too
> onerous and I suspect scrutineers would accept that as a reasonable
> demonstration of the reliability of the software.
> 
> In fact, I would push for this on every OCR/computer combination
> used for the final count. Anyone who has used it knows how OCR is
> UNreliable. If this is supposed to be interpreting the full spectrum
> of hand-written numbers, I would be questioning things as well.
>
> We're not talking about a binary ticked/unticked box. Think how
> the US got into strife with the hanging-chad fiasco in Florida and
> how Al Gore did not become president of the US as a result. This
> feels worse...

That idea has merit for picking up any systematic substitution of
images.

The unreliability of the OCR is catered for by two design features.
First, the OCR system has to have confidence that it made a match
for every mark on the paper (the required level is unspecified);
any inconclusive match causes the entire OCR result to be rejected.
Second, the OCR result is compared with a separate manual data entry
of the same image, and any mismatch escalates the image for an
additional data entry pass and possibly an examination by officials
of the image or the physical paper.
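As a rough illustration only, the two features could be sketched as
below. The threshold value, field names, and the simple dict comparison
are all my assumptions; the actual confidence level and escalation
procedure are not specified in the design described above.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical threshold -- the actual required confidence level is unspecified.
CONFIDENCE_THRESHOLD = 0.95


class Outcome(Enum):
    ACCEPTED = "accepted"    # OCR and manual data entry agree
    ESCALATED = "escalated"  # needs another entry pass or official examination


@dataclass
class Mark:
    box: str           # ballot box the mark was found in (illustrative field)
    value: int         # preference number the OCR read
    confidence: float  # OCR's confidence that it matched this mark


def verify_ballot(ocr_marks: list[Mark], manual_entry: dict[str, int]) -> Outcome:
    """Sketch of the two design features described above.

    Feature 1: the OCR must be confident about every mark; any
    inconclusive match rejects the entire OCR result for this ballot.
    Feature 2: the OCR result is cross-checked against an independent
    manual data entry of the same image; any mismatch escalates.
    """
    # Feature 1: one low-confidence mark rejects the whole OCR result.
    if any(m.confidence < CONFIDENCE_THRESHOLD for m in ocr_marks):
        return Outcome.ESCALATED

    # Feature 2: compare the OCR reading with the separate manual entry.
    ocr_reading = {m.box: m.value for m in ocr_marks}
    if ocr_reading != manual_entry:
        return Outcome.ESCALATED

    return Outcome.ACCEPTED
```

For example, a ballot whose OCR reading matches the manual entry and
clears the threshold is accepted; a low-confidence mark or a mismatch
sends it for escalation.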

Chris
_______________________________________________
Link mailing list
Link@mailman.anu.edu.au
http://mailman.anu.edu.au/mailman/listinfo/link
