On 9/03/2024 3:35 pm, Roger Clarke wrote:
On 9/3/2024 13:33, David wrote:
Personally, I first came across neural networks in the late '60s when my 
supervisor at the time was experimenting with them on a very slow 
common-or-garden engineering computer.  But we could still see the model 
learning.
Where 'the model learning' =
'model-parameters being adjusted by software on the basis of pre-defined 
aspects of the data-inputs'

I don't want to play down the significance, because it was indeed a 
generational change in the mode of software development. But it helps to remain 
balanced about artefacts' capabilities when anthropomorphic terms are avoided.

The neural-network model I'm talking about was a pretty basic one (it was the late 1960s, 
after all!) which adapted to recognising basic shapes from "fuzzy" data.  It 
embodied some principles of the wetware between our ears, such as sensors (nerve endings) 
and threshold logic (synaptic action potentials), but it had nothing to do with software 
development.
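
For anyone who hasn't met threshold logic, a single such unit just compares a 
weighted sum of its inputs against a firing threshold.  A minimal sketch in 
modern Fortran (purely illustrative - the weights, inputs and threshold are 
made-up numbers, and it's certainly not the original program!):

  ! One McCulloch-Pitts-style threshold unit: the "neuron" fires only
  ! when the weighted sum of sensor inputs reaches the threshold.
  program threshold_unit
    implicit none
    real :: weights(3) = [0.5, 0.5, -0.25]   ! adjustable model-parameters
    real :: inputs(3)  = [1.0, 0.8, 0.1]     ! "fuzzy" sensor readings
    real :: threshold  = 0.6                 ! cf. the action potential
    real :: activation

    activation = dot_product(weights, inputs)
    if (activation >= threshold) then
       print *, 'unit fires: output 1'
    else
       print *, 'unit is silent: output 0'
    end if
  end program threshold_unit

Learning, in the sense Roger pins down above, is then just software adjusting 
those weights in response to the data.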

If anything, it may have been more akin to the small neural network in our eyes 
which pre-processes images, so that the number of nerve fibres going to the 
brain is only around 25% (from memory?) of the number of retinal sensors.  I 
guess that improves the S/N ratio (:-).
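
The intuition can be made concrete: averaging n uncorrelated noisy readings 
reduces the noise by roughly a factor of sqrt(n), so fewer but cleaner channels 
can carry much the same information.  A toy sketch of 4-into-1 pooling (my own 
construction, emphatically not a model of the retina):

  ! Each "nerve fibre" carries the average of a pool of four "sensors",
  ! so 16 noisy readings leave as 4 smoothed channels.
  program pooling_demo
    implicit none
    integer, parameter :: n_sensors = 16, pool = 4
    real :: raw(n_sensors), fibres(n_sensors/pool)
    integer :: i

    call random_number(raw)          ! stand-in for noisy sensor readings
    do i = 1, n_sensors/pool
       fibres(i) = sum(raw((i-1)*pool+1 : i*pool)) / pool
    end do
    print *, 'sensors in:', n_sensors, '   fibres out:', size(fibres)
  end program pooling_demo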

Believe me, I'm very aware of the dangers inherent in using anthropomorphic terms in any discussion of 
AI.  The very expression "artificial intelligence" is misleading to begin with, and probably 
leads many people to ascribe to such systems cognitive powers they simply do not possess.  It's 
easy to see how this might lead to accidents involving "self-driving" cars, and to more 
spectacular & disastrous mis-applications of AI such as the fictional one foreseen by Arthur C. 
Clarke (HAL 9000 in "2001: A Space Odyssey").

I wrote in 1990-91, in 'A Contingency Approach to the Application Software 
Generations', in s.8 (The Application Software Generations as Levels of 
Abstraction), at:
http://www.rogerclarke.com/SOS/SwareGenns.html#ASGLA

The shape of at least one further generation is emerging from the mists. 
Connectionist or neural machines, whether implemented in software or using 
massively parallel hardware architectures, involve a conception of knowledge 
yet more abstract than knowledge-bases containing production rules.

[...]

Thirty years later, I'd say it a little differently.  But that passage did 
manage to build in the notions of the (merely) empirical; of abdication of 
responsibility / the decision factory [i.e. a decision system, not a 
decision-support system]; and of the maintenance operative, not the teacher.

But in the late '60s, I was very prosaically writing a little Fortran (before it 
even had version-numbers), and was shortly to embark on writing rather 
more code in that deeply intellectual language, COBOL.  I don't think I heard 
of neural networks until a *long* time after that.

Fortunately, I was never required to write COBOL, but I did do some useful 
stuff in Fortran IV at dear old AWA Research Labs - my first real job.  And 
Fortran is still going, though it has now evolved into a modern language supporting 
reentrant code, patterns, object-oriented code, etc.
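
By way of illustration - a toy of my own devising, not anything authoritative - 
modern Fortran happily supports recursion (hence reentrancy) and type-bound 
procedures in the object-oriented style:

  ! Illustrative sketch only: a derived type with a type-bound procedure
  ! (Fortran 2003 OO) and a recursive function (illegal in Fortran IV!).
  module modern_demo
    implicit none
    type :: counter
       integer :: n = 0
     contains
       procedure :: bump            ! type-bound procedure, OO style
    end type counter
  contains
    subroutine bump(self)
      class(counter), intent(inout) :: self
      self%n = self%n + 1
    end subroutine bump
    recursive function factorial(k) result(f)
      integer, intent(in) :: k
      integer :: f
      if (k <= 1) then
         f = 1
      else
         f = k * factorial(k - 1)
      end if
    end function factorial
  end module modern_demo

  program demo
    use modern_demo
    implicit none
    type(counter) :: c
    call c%bump()
    print *, 'count =', c%n, '   5! =', factorial(5)
  end program demo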

For Christmas, my kids, ever-desperate to avoid resorting to socks or 
handkerchiefs, gave me a T-shirt with these words emblazoned on it:

          'I'm sorry Dave.  I'm afraid I can't do that.'

I'd like to locate a more dressy T-shirt showing a "Wanted Dead & Alive" poster 
for Schroedinger's Cat!

David Lochrin