I think I did it.

For reference, the article / tutorial:
https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1

This is the PL solution and it relies on my ext lib here:
https://bitbucket.org/hsarvell/ext/overview
----------------------
(load
   "lib.l"
   "ext.l"
   "ext/base.l"
   "ext/lst.l"
   "lib/math.l")

(import exlst~applyr)

(de colM (M Cn)
   (make
      (for Col M
         (link (car (nth Col Cn)))) ) )

## Transpose matrix: https://en.wikipedia.org/wiki/Transpose
(de trM (M)
   (make
      (for N (length (car M))    # Number of columns.
         (link (colM M N)) ) ) )

## Multiply matrix: https://en.wikipedia.org/wiki/Matrix_multiplication
(de mM @
   (let (Am (next) Bm (next))
      (let Rm
         (if2 (exlst~flat? Am) (exlst~flat? Bm)
            (sum * Am Bm)
            (make
               (for Br (trM Bm)
                  (link (sum * Am Br)) ) )
            (prog
               (println "Shape mismatch")
               (bye))
            (make
               (for Ar Am
                  (link
                     (make
                        (for Br (trM Bm)
                           (link (sum * Ar Br)) ) ) ) ) ) )
         (ifn (rest)
            Rm
            (mM Rm (next)) ) ) ) )

(de sigm (X)
   (*/ 1.0 1.0 (+ 1.0 (exp (- X)))) )

(de sigmd (X)
   (*/ X (- 1.0 X) 1.0))

(setq *Inputs '((0 0 1) (1 1 1) (1 0 1) (0 1 1)))
(setq *Outputs (trM '((0 1.0 1.0 0))))
#(setq *Weights (make (do 3 (link (list (- (* 2 (excmd~randNum 0 1.0)) 1.0))))))
(setq *Weights '((-0.16595599) (0.444064899) (-0.99977125)))

(do 10000
   (let (output (applyr 'sigm (mM *Inputs *Weights))
           weightInc (applyr '((El1 El2) (*/ El1 El2 1.0)) (applyr '- *Outputs output) (applyr 'sigmd output)) )
      (setq *Weights (applyr '+ *Weights (mM (trM *Inputs) weightInc))) ) )

(println (applyr 'sigm (mM '(1 0 0) *Weights)))

(bye)
----------------------
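Since the `if2` dispatch in `mM` is fairly dense, here is a rough plain-Python sketch of the same four cases (vector·vector, vector·matrix, shape mismatch, matrix·matrix). The helper names are mine, not from either listing, and this is just an illustration of the dispatch, not a drop-in replacement:

```python
def is_flat(m):
    # A "flat" argument is a plain vector rather than a list of rows,
    # analogous to exlst~flat? in the PicoLisp version.
    return not isinstance(m[0], list)

def transpose(m):
    # Rows become columns, as in trM.
    return [list(col) for col in zip(*m)]

def mat_mul(a, b):
    # Mirrors the four branches of mM's if2 dispatch.
    if is_flat(a) and is_flat(b):
        # vector . vector -> dot product (a scalar)
        return sum(x * y for x, y in zip(a, b))
    if is_flat(a):
        # vector . matrix -> vector
        return [sum(x * y for x, y in zip(a, col)) for col in transpose(b)]
    if is_flat(b):
        # matrix . vector without transposing first
        raise ValueError("Shape mismatch")
    # matrix . matrix -> matrix
    return [[sum(x * y for x, y in zip(row, col)) for col in transpose(b)]
            for row in a]
```

For example, `mat_mul([[1, 2], [3, 4]], [[5], [6]])` gives `[[17], [39]]`, the same shape of result the PicoLisp `mM` produces for two nested lists.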


And here is the Python solution:
----------------------
from numpy import exp, array, random, dot

def sigm(x):
    return 1 / (1 + exp(-x))

def sigmd(x):
    return x * (1 - x)

training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T
random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1

for iteration in range(10000):
    output = sigm(dot(training_set_inputs, synaptic_weights))
    synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * sigmd(output))

print(sigm(dot(array([1, 0, 0]), synaptic_weights)))
----------------------

A lot is going on behind the scenes in the Python solution: sigm(matrix)
works because NumPy's exp is a "ufunc" that is applied elementwise to the
whole array, so the function effectively maps itself over the matrix. Very
fancy.
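To make that elementwise behaviour concrete, here is a small standalone check (not from either listing, just an illustration):

```python
import numpy as np

def sigm(x):
    # np.exp is a ufunc: it maps over every element of an array, so the
    # exact same function body works on scalars, vectors, and matrices.
    return 1 / (1 + np.exp(-x))

print(sigm(0))                      # scalar input  -> 0.5
print(sigm(np.array([0.0, 0.0])))   # array input   -> array of 0.5s
```

There is no explicit apply/map anywhere; the broadcasting is built into the ufunc machinery.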

Also, m + m automatically knows what to do, because of operator
overloading or some such.
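That overloading is easy to demonstrate in isolation; note in particular that * on ndarrays is elementwise, not a matrix product (this snippet is just an illustration, not from the tutorial):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[10, 20], [30, 40]])

# ndarray overloads +, -, and * as elementwise operations:
print(a + b)     # [[11 22] [33 44]]
print(a * b)     # [[10 40] [90 160]] -- elementwise, NOT matrix multiply
print(a.dot(b))  # [[70 100] [150 220]] -- the actual matrix product
```

This is why the training loop can write `(training_set_outputs - output) * sigmd(output)` and get an elementwise result, while `dot` is reserved for real matrix multiplication.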

So what do you think, is there a point in trying to take this AI / ML stuff
further in PL or should I / we just give up?

It feels like these kinds of things are simply better suited to infix
notation than to prefix notation, or is that just the result of literally
decades of doing math on paper with infix notation?

One thing though: the PL code is more explicit, so it's easier for me to
understand what's going on, whereas the Python version contains dizzying
amounts of magic.

Any thoughts?
