Re: Math exp issue?
I've got this currently:

## See http://rosettacode.org/wiki/Matrix_transposition#PicoLisp
(de trM (M)
   (apply mapcar M list) )

## According to https://en.wikipedia.org/wiki/Matrix_multiplication
## Handles all scenarios just like Python numpy's dot().
(de mM @
   (let (Am (next)
         Bm (next)
         Mrows '((Ar)
            (make
               (for Br (trM Bm)
                  (link (sum */ Ar Br (1.0 .))) ) ) ) )
      (let Rm
         (if2 (exlst~flat? Am) (exlst~flat? Bm)
            ## Both are flat, so we just multiply them and sum;
            ## the result is a number, see https://en.wikipedia.org/wiki/Dot_product
            (sum * Am Bm)
            ## The A matrix is flat, the B matrix is not, so we loop over the
            ## columns of B and multiply each with A.
            (Mrows Am)
            ## B is flat, A is not: doesn't work.
            (prog (println "Shape mismatch") (bye))
            ## They are both multidimensional, so we loop over both.
            (make
               (for Ar Am
                  (link (Mrows Ar)) ) ) )
         (ifn (rest)
            Rm
            (mM Rm (next)) ) ) ) )  # Fold in any further matrices.

I didn't see much benefit in using the Rosetta version for matrix multiplication, but nice touch there with the sum */ and (1.0 .), that was definitely needed.

Will try and translate this one from Python to PL now:
https://medium.com/technology-invention-and-more/how-to-build-a-multi-layered-neural-network-in-python-53ec3d1d326a

On Wed, Feb 21, 2018 at 7:55 PM, Henrik Sarvell wrote:
> Thanks Alex,
>
> I'll try them out, and modify multiply to handle an arbitrary amount of
> matrices.
>
> On Wed, Feb 21, 2018 at 11:36 AM, Alexander Burger wrote:
>> Hi Henrik,
>>
>> > For reference, the article / tutorial:
>> > https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1
>>
>> Nice!
>>
>> > (de colM (M Cn)
>> >    (make
>> >       (for Col M
>> >          (link (car (nth Col Cn))) ) ) )
>> >
>> > ## Transpose matrix: https://en.wikipedia.org/wiki/Transpose
>> > (de trM (M)
>> >    (make
>> >       (for N (length (car M))  # Number of columns.
>> >          (link (colM M N)) ) ) )
>>
>> Note that there are some matrix manipulation tasks in RosettaCode.
>>
>> For example, the matrix transposition is simply:
>>
>>    (de matTrans (Mat)
>>       (apply mapcar Mat list) )
>>
>> Matrix multiplication (for 2 matrices) in RosettaCode is
>>
>>    (de matMul (Mat1 Mat2)
>>       (mapcar
>>          '((Row)
>>             (apply mapcar Mat2
>>                '(@ (sum */ Row (rest) (1.0 .))) ) )
>>          Mat1 ) )
>>
>> Not sure if this fits your needs ...
>>
>> ♪♫ Alex
>>
>> --
>> UNSUBSCRIBE: mailto:picolisp@software-lab.de?subject=Unsubscribe
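[For readers following along without a PicoLisp at hand: the `(sum */ Ar Br (1.0 .))` idiom above is a fixed-point dot product, where every pairwise product is scaled back down by whatever `1.0` reads as under the current `scl`. A rough Python sketch of the same bookkeeping, assuming a scale of 10^6 as in `(scl 6)` — the helper names are mine, not from the thread:]

```python
SCL = 10 ** 6  # like (scl 6): the literal 1.0 is stored as the integer 1000000

def fmul(a, b):
    # Fixed-point multiply, like PicoLisp's (*/ A B 1.0): multiply, then
    # divide the double-scaled product back down by the scale factor.
    return a * b // SCL

def fdot(row, col):
    # Like (sum */ Row Col (1.0 .)): sum of rescaled pairwise products.
    return sum(fmul(a, b) for a, b in zip(row, col))

def mat_mul(m1, m2):
    # Multiply two matrices of scaled integers, like matMul above.
    cols = list(zip(*m2))  # transpose m2 to walk its columns
    return [[fdot(row, col) for col in cols] for row in m1]

# 0.5 * 2.0 = 1.0 in scaled-integer form:
print(fmul(500000, 2000000))  # 1000000
```

[One caveat: `//` truncates toward negative infinity where PicoLisp's `*/` rounds, so results can differ by one unit in the last place; the sketch is only meant to show where the extra division by the scale comes in.]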
Re: Math exp issue?
Thanks Alex,

I'll try them out, and modify multiply to handle an arbitrary amount of matrices.

On Wed, Feb 21, 2018 at 11:36 AM, Alexander Burger wrote:
> Hi Henrik,
>
> > For reference, the article / tutorial:
> > https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1
>
> Nice!
>
> > (de colM (M Cn)
> >    (make
> >       (for Col M
> >          (link (car (nth Col Cn))) ) ) )
> >
> > ## Transpose matrix: https://en.wikipedia.org/wiki/Transpose
> > (de trM (M)
> >    (make
> >       (for N (length (car M))  # Number of columns.
> >          (link (colM M N)) ) ) )
>
> Note that there are some matrix manipulation tasks in RosettaCode.
>
> For example, the matrix transposition is simply:
>
>    (de matTrans (Mat)
>       (apply mapcar Mat list) )
>
> Matrix multiplication (for 2 matrices) in RosettaCode is
>
>    (de matMul (Mat1 Mat2)
>       (mapcar
>          '((Row)
>             (apply mapcar Mat2
>                '(@ (sum */ Row (rest) (1.0 .))) ) )
>          Mat1 ) )
>
> Not sure if this fits your needs ...
>
> ♪♫ Alex
Re: Math exp issue?
Hi Henrik,

> For reference, the article / tutorial:
> https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1

Nice!

> (de colM (M Cn)
>    (make
>       (for Col M
>          (link (car (nth Col Cn))) ) ) )
>
> ## Transpose matrix: https://en.wikipedia.org/wiki/Transpose
> (de trM (M)
>    (make
>       (for N (length (car M))  # Number of columns.
>          (link (colM M N)) ) ) )

Note that there are some matrix manipulation tasks in RosettaCode.

For example, the matrix transposition is simply:

   (de matTrans (Mat)
      (apply mapcar Mat list) )

Matrix multiplication (for 2 matrices) in RosettaCode is

   (de matMul (Mat1 Mat2)
      (mapcar
         '((Row)
            (apply mapcar Mat2
               '(@ (sum */ Row (rest) (1.0 .))) ) )
         Mat1 ) )

Not sure if this fits your needs ...

♪♫ Alex
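[A side note for readers comparing with Python: `(apply mapcar Mat list)` transposes because `mapcar` walks all the row lists in lockstep, collecting the Nth element of each row into the Nth result list. Python's `zip(*rows)` does exactly the same walk, which is handy for checking results outside PicoLisp — a sketch of mine, not part of the thread:]

```python
def mat_trans(mat):
    # zip(*mat) iterates all rows in lockstep, like (apply mapcar Mat list),
    # so each output row collects one column of the input.
    return [list(col) for col in zip(*mat)]

m = [[1, 2, 3],
     [4, 5, 6]]
print(mat_trans(m))  # [[1, 4], [2, 5], [3, 6]]
```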
Re: Math exp issue?
I think I did it. For reference, the article / tutorial:
https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1

This is the PL solution, and it relies on my ext lib here:
https://bitbucket.org/hsarvell/ext/overview

--

(load "lib.l" "ext.l" "ext/base.l" "ext/lst.l" "lib/math.l")

(import exlst~applyr)

(de colM (M Cn)
   (make
      (for Col M
         (link (car (nth Col Cn))) ) ) )

## Transpose matrix: https://en.wikipedia.org/wiki/Transpose
(de trM (M)
   (make
      (for N (length (car M))  # Number of columns.
         (link (colM M N)) ) ) )

## Multiply matrix: https://en.wikipedia.org/wiki/Matrix_multiplication
(de mM @
   (let (Am (next) Bm (next))
      (let Rm
         (if2 (exlst~flat? Am) (exlst~flat? Bm)
            (sum * Am Bm)
            (make
               (for Br (trM Bm)
                  (link (sum * Am Br)) ) )
            (prog (println "Shape mismatch") (bye))
            (make
               (for Ar Am
                  (link
                     (make
                        (for Br (trM Bm)
                           (link (sum * Ar Br)) ) ) ) ) ) )
         (ifn (rest)
            Rm
            (mM Rm (next)) ) ) ) )

(de sigm (X)
   (*/ 1.0 1.0 (+ 1.0 (exp (- X)))) )

(de sigmd (X)
   (*/ X (- 1.0 X) 1.0) )

(setq *Inputs '((0 0 1) (1 1 1) (1 0 1) (0 1 1)))
(setq *Outputs (trM '((0 1.0 1.0 0))))

# (setq *Weights
#    (make
#       (do 3
#          (link (list (- (* 2 (excmd~randNum 0 1.0)) 1.0))) ) ) )
(setq *Weights '((-0.16595599) (0.444064899) (-0.99977125)))

(do 1
   (let (output (applyr 'sigm (mM *Inputs *Weights))
         weightInc
            (applyr
               '((El1 El2) (*/ El1 El2 1.0))
               (applyr '- *Outputs output)
               (applyr 'sigmd output) ) )
      (setq *Weights (applyr '+ *Weights (mM (trM *Inputs) weightInc))) ) )

(println (applyr 'sigm (mM '(1 0 0) *Weights)))

(bye)

--

And here is the Python solution:

--

from numpy import exp, array, random, dot

def sigm(x):
    return 1 / (1 + exp(-x))

def sigmd(x):
    return x * (1 - x)

training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
training_set_outputs = array([[0, 1, 1, 0]]).T

random.seed(1)
synaptic_weights = 2 * random.random((3, 1)) - 1

for iteration in xrange(1):
    output = sigm(dot(training_set_inputs, synaptic_weights))
    synaptic_weights += dot(training_set_inputs.T,
                            (training_set_outputs - output) * sigmd(output))

print sigm(dot(array([1, 0, 0]), synaptic_weights))

--

A lot is going on behind the scenes in the Python solution: sigm(matrix) somehow gets translated to apply(sigm, matrix) behind the scenes, very fancy. Also m + m automatically understands what to do because of operator overloading or some such.

So what do you think, is there a point in trying to take this AI / ML stuff further in PL, or should I / we just give up? It feels like these types of things are just more suited for infix notation as opposed to prefix notation, or is that just literally decades of doing math on paper with infix notation? One thing though: the PL code is more explicit, it's easier for me to understand what's going on, whereas the Python stuff contains dizzying amounts of magic.

Any thoughts?
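[On the question of where the numpy "magic" lives: nothing rewrites `sigm(matrix)` into `apply(sigm, matrix)`. `exp` is a numpy ufunc that maps itself over every array element, and `-`, `*`, `+` on arrays are element-wise through operator overloading (`__sub__`, `__mul__`, ...), so the whole expression `1 / (1 + exp(-x))` is evaluated per element. The explicit loop that `applyr` is emulating might look like this in plain Python — my sketch, stdlib only:]

```python
import math

def sigm_scalar(x):
    # The ordinary one-number sigmoid.
    return 1 / (1 + math.exp(-x))

def sigm_matrix(m):
    # The explicit version of what numpy does implicitly: apply the
    # scalar sigmoid to every element of a list-of-lists matrix.
    return [[sigm_scalar(x) for x in row] for row in m]

print(sigm_matrix([[0.0], [100.0]]))  # roughly [[0.5], [1.0]]
```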
Re: Math exp issue?
Ah, great resource, thanks, let's see if I can get all the way to the finish line on this one now...

On Mon, Jan 29, 2018 at 8:25 AM, Alexander Burger wrote:
> On Mon, Jan 29, 2018 at 08:14:31AM +0100, Alexander Burger wrote:
> > To the question why it returns 101 for (exp 1): 101 is the
> > representation of the float number 1:
> >
> > : (round 101)
> > -> "1.000"
>
> .. and (exp 1) is (exp 0.01) ...
>
> https://the-m6.net/blog/fixed-point-arithmetic-in-picolisp.html
>
> ♪♫ Alex
Re: Math exp issue?
Original Message
On 29 Jan 2018 12:55 pm, Alexander Burger wrote:

> On Mon, Jan 29, 2018 at 08:14:31AM +0100, Alexander Burger wrote:
> > To the question why it returns 101 for (exp 1): 101 is the
> > representation of the float number 1:
> >
> > : (round 101)
> > -> "1.000"
>
> .. and (exp 1) is (exp 0.01) ...

Thanks! Now I understand.

> https://the-m6.net/blog/fixed-point-arithmetic-in-picolisp.html

Thanks for the resource. Reading it for the 2nd time after a long break (I had last checked it in 2017), I get it.

> ♪♫ Alex
Re: Math exp issue?
Hi, since I basically study mathematics at the moment, I will implement exp in pure PicoLisp when I find the time.

2018-01-29 8:25 GMT+01:00 Alexander Burger:
> On Mon, Jan 29, 2018 at 08:14:31AM +0100, Alexander Burger wrote:
> > To the question why it returns 101 for (exp 1): 101 is the
> > representation of the float number 1:
> >
> > : (round 101)
> > -> "1.000"
>
> .. and (exp 1) is (exp 0.01) ...
>
> https://the-m6.net/blog/fixed-point-arithmetic-in-picolisp.html
>
> ♪♫ Alex
Re: Math exp issue?
On Mon, Jan 29, 2018 at 08:14:31AM +0100, Alexander Burger wrote:
> To the question why it returns 101 for (exp 1): 101 is the
> representation of the float number 1:
>
> : (round 101)
> -> "1.000"

.. and (exp 1) is (exp 0.01) ...

https://the-m6.net/blog/fixed-point-arithmetic-in-picolisp.html

♪♫ Alex
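[Alex's explanation in other words: a fixed-point number N is stored as round(N * 10^scl), so a bare 1 is not "one" but 10^-scl. At a scale of 100 (scl 2), the literal 1 denotes 0.01, and round(exp(0.01) * 100) is exactly 101, while the properly-scaled input 1.0 at (scl 6) gives e, matching the (exp 1.0) -> 2718282 result elsewhere in this thread. The same arithmetic checked against Python's math.exp — a sketch, helper name mine:]

```python
import math

def fexp(n, scl):
    # Fixed-point exp: interpret the scaled integer n at the given scale,
    # call the C-library exp (the same one math.l wraps), and scale the
    # float result back up to an integer.
    s = 10 ** scl
    return round(math.exp(n / s) * s)

# A bare 1 at scl 2 means 0.01, hence the mysterious 101:
print(fexp(1, 2))        # 101
# A properly scaled 1.0 at scl 6 gives e:
print(fexp(10 ** 6, 6))  # 2718282
```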
Re: Math exp issue?
Hi PositronPro,

oops, I did not read the whole thread, you already answered .. :)

On Sun, Jan 28, 2018 at 11:40:49PM -0500, PositronPro wrote:
> I can't answer why it returns 101 for (exp 1).
>
> You can try (exp 1.0), it returns 2718282. This works fine till scl is 8 or
> less, anything more it simply returns T.

To the question why it returns 101 for (exp 1): 101 is the representation of the float number 1:

: (round 101)
-> "1.000"

♪♫ Alex
Re: Math exp issue?
Hi Henrik,

> Hi list, long time no see!

Welcome back! :)

> The definition of exp in math.l leads me to believe that we're calling this
> C function:
> https://www.tutorialspoint.com/c_standard_library/c_function_exp.htm

Right.

> Since we can't do floating numbers in PL and I notice that math.l uses
> (scl 6) I would hope to get something like 2718281 back from this call:
> (println (exp 1)).
>
> But that is not happening, instead I get 101.

You want to call

: (exp 1.0)
-> 2718282

♪♫ Alex
Re: Math exp issue?
Okay, I did a bit of testing; it happens when the value of *Scl is 1 or more. You get a similar result if you add or subtract two or more numbers which are not in the same format, like one without a decimal and the other with a decimal. Make sure all the values are in decimal format with scl set properly, and most things should be OK.

(load '@lib/math.l)
(exp 2)
-> 102
(exp 9)
-> 109

(scl 1)
(+ 1 1.0)
-> 11
(- 1.0 1)
-> -9
(+ 1.0 1.0)
-> 20

Original Message
On 29 Jan 2018 10:10 am, PositronPro wrote:

> I can't answer why it returns 101 for (exp 1).
>
> You can try (exp 1.0), it returns 2718282. This works fine till scl is 8 or
> less, anything more it simply returns T.
>
> (load '@lib/math.l)
> (exp 1)
> -> 101
>
> (exp 1.0)
> -> 2718282
>
> (scl 8)
> (exp 1.0)
> -> 2688...200 (50 digits number)
>
> (scl 9)  # or any bigger value like 11
> -> T
>
> Original Message
> On 29 Jan 2018 3:49 am, Henrik Sarvell wrote:
>
>> Hi list, long time no see!
>>
>> The definition of exp in math.l leads me to believe that we're calling this
>> C function:
>> https://www.tutorialspoint.com/c_standard_library/c_function_exp.htm
>>
>> Since we can't do floating numbers in PL and I notice that math.l uses
>> (scl 6) I would hope to get something like 2718281 back from this call:
>> (println (exp 1)).
>>
>> But that is not happening, instead I get 101.
>>
>> Related is my current project, which is converting this naive / simple
>> neural network written in Python to PL:
>> https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1
>>
>> Where I'm currently stuck on the sigmoid function, which in turn is making
>> use of exp.
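[PositronPro's mixed-format examples can be decoded the same way: + and - on fixed-point numbers just add the raw integers, so at (scl 1) the literal 1.0 is stored as 10 while a bare 1 stays 1, meaning 0.1, and (+ 1 1.0) -> 11 is really 0.1 + 1.0 = 1.1. A Python sketch of that bookkeeping — names are mine, not from the thread:]

```python
SCL = 10  # like (scl 1): one decimal digit of precision

def show(n):
    # Render a scaled integer back as a decimal string, the way
    # PicoLisp's (format N 1) would.
    return f"{n / SCL:.1f}"

one_fixed = 10  # the literal 1.0 at scl 1
one_raw = 1     # a bare 1, which the scaled world reads as 0.1

print(one_raw + one_fixed, show(one_raw + one_fixed))      # 11 1.1
print(one_fixed + one_fixed, show(one_fixed + one_fixed))  # 20 2.0
```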
Re: Math exp issue?
I can't answer why it returns 101 for (exp 1).

You can try (exp 1.0), it returns 2718282. This works fine till scl is 8 or less, anything more it simply returns T.

(load '@lib/math.l)
(exp 1)
-> 101

(exp 1.0)
-> 2718282

(scl 8)
(exp 1.0)
-> 2688...200 (50 digits number)

(scl 9)  # or any bigger value like 11
-> T

Original Message
On 29 Jan 2018 3:49 am, Henrik Sarvell wrote:

> Hi list, long time no see!
>
> The definition of exp in math.l leads me to believe that we're calling this
> C function:
> https://www.tutorialspoint.com/c_standard_library/c_function_exp.htm
>
> Since we can't do floating numbers in PL and I notice that math.l uses
> (scl 6) I would hope to get something like 2718281 back from this call:
> (println (exp 1)).
>
> But that is not happening, instead I get 101.
>
> Related is my current project, which is converting this naive / simple
> neural network written in Python to PL:
> https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1
>
> Where I'm currently stuck on the sigmoid function, which in turn is making
> use of exp.