> On Nov 13, 2022, at 9:09 AM, Paul Koning via cctalk <cctalk@classiccmp.org> 
> wrote:
> 
>> On Nov 12, 2022, at 1:08 PM, Anders Nelson via cctalk 
>> <cctalk@classiccmp.org> wrote:
>> 
>> I bet NN/AI would be helpful with data recovery - if we can model certain
>> common failure modes with those old drive heads we could infer what the
>> data should have been...
> 
> NN, maybe; I need to understand those better.  I see they are now a building 
> block for OCR.
> 
> AI, not so clear.  In my view, AI is a catch-all term for "software whose 
> properties are unknown and probably unknowable".  A computer, including one 
> that executes AI software, is a math processing engine, so in principle its 
> behavior is fully defined by its design and by the software in it.  But when 
> you do AI in which "learning" is part of the scheme, the resulting behavior 
> is in fact unknown and undefined.  
> 
> For some applications that may be ok.  OCR doesn't suffer materially from 
> occasional random errors, since it has errors anyway from the nature of its 
> input.  But, for example, I shudder at the notion of AI in safety-critical 
> applications (like autopilots for aircraft, or worse yet for cars).  A safety 
> critical application implemented in a manner that precludes the existence of 
> a specification is a fundamentally insane notion.
> 
>       paul
> 

Paul,
        not a fan of AI myself. But I feel constrained to point out that the 
alternative to “AI in safety-critical applications” often is “a minimum-wage 
employee in a safety-critical application,” which may or may not be an 
improvement. Agreed that AI is fundamentally not absolutely predictable - but 
neither are people. For problems complex enough to require either one in a 
safety-critical decision-making loop, it may come down to a question of 
either 1) trusting the statistics (AI driving may already be *statistically* 
safer than human driving), 2) designing the whole system in such a manner as to 
be tolerant of decision-making faults, or 3) not doing the dangerous activity 
because it’s not monitorable.
        I would say our current road and automobile system doesn’t satisfy any 
of those criteria, FWIW.
        For problems simple enough to write closed-form, formally-verifiable 
software to handle, I *definitely* agree that is the way to go. 
                                        - Mark
