The main defects of both R and Python are the lack of a type system and the 
lack of high-performance compilation. I find that R still follows (is used by) 
the statistics research community more than Python does. Common Lisp was 
always better than either.

Sent from my iPhone

On Jan 8, 2023, at 11:03 AM, Russ Abbott <[email protected]> wrote:


As indicated in my original reply, my interest in this project grows from my 
relative ignorance of Deep Learning. My career has focused exclusively on 
symbolic computing. I've worked with and taught (a) functional programming, 
logic programming, and related issues in advanced Python; (b) complex systems, 
agent-based modeling, genetic algorithms, and related evolutionary processes; 
(c) a bit of constraint programming, especially in MiniZinc; and (d) 
reinforcement learning in the form of Q-learning, i.e., reinforcement learning 
without neural nets. (A minimal sketch of the Q-learning update appears 
below.) I've always avoided neural nets--and, more generally, numerical 
programming of any sort.
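
For concreteness, here is what tabular Q-learning amounts to. This is a 
minimal sketch; the state/action counts and the hyperparameters are 
illustrative guesses, and deep RL differs mainly in replacing the table Q with 
a neural net:

    import numpy as np

    # Tabular Q-learning: Q[state, action] estimates long-run reward.
    # After taking `action` in `state` and observing `reward` and `next_state`:
    #     Q[s, a] += alpha * (reward + gamma * max_a' Q[s', a'] - Q[s, a])
    n_states, n_actions = 16, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma = 0.1, 0.99    # learning rate and discount factor

    def q_update(state, action, reward, next_state):
        target = reward + gamma * Q[next_state].max()
        Q[state, action] += alpha * (target - Q[state, action])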

Deep learning has produced so many impressive results that I've decided to 
devote much of my retirement life to learning about it. I retired at the end of 
Spring 2022 and (after a break) am now devoting much of my time to learning 
more about Deep Neural Nets. So far, I've dipped my brain into it at various 
points. I think I've learned a fair amount. For example,

  *   I now know how to build a neural net (NN) that adds two numbers using a 
single layer with a single neuron. It's really quite simple and is, I think, a 
beautiful example of how NNs work. If I were to teach an intro to NNs, I'd 
start with this. (A sketch appears after this list.)
  *   I've gone through the Kaggle Deep Learning sequence mentioned earlier.
  *   I found a paper on the universal approximation theorem, which shows that 
a NN with a single hidden layer can approximate any continuous function to any 
degree of accuracy. (This is a very nice result, although I believe it's not 
used explicitly in building serious Deep NN systems. A small demonstration 
appears after this list.)
  *   From what I've seen so far, most serious DNNs are built using Keras 
rather than PyTorch.
  *   I've looked at Jeremy Howard's fast.ai material. I was going to go 
through the course but stopped when I found that it uses PyTorch. Also, it 
seems to be built on fast.ai libraries that do a lot of the work for you 
without explanation. And it seems to focus almost exclusively on Convolutional 
NNs.
  *   My impression of DNNs is that to a great extent they are ad hoc. There is 
no good way to determine the best architecture to use for a given problem. By 
architecture, I mean the number of layers, the number of neurons in each layer, 
the types of layers, the activation functions to use, etc.
  *   All DNNs that I've seen use Python as code glue rather than R or some 
other language. I like Python--so I'm pleased with that.
  *   To build serious NNs, one should learn the Python libraries NumPy (array 
manipulation) and pandas (data processing). NumPy especially seems to be used 
in virtually all the DNNs that I've seen. (A taste of both appears after this 
list.)
  *   Keras and probably PyTorch include a number of special-purpose neurons 
and layers that can be included in one's DNN. These include a Dropout layer, 
LSTM (long short-term memory) neurons, convolutional layers, recurrent neural 
net (RNN) layers, and more recently transformers, which get credit for ChatGPT 
and related programs. My impression is that these special-purpose layers are 
ad hoc in the same sense that the functions or libraries one finds useful in a 
programming language are ad hoc. They have been very important for the success 
of DNNs, but they came into existence because people invented them, in the 
same way that people invented useful functions and libraries. (An example 
model using a few of them appears after this list.)
  *   NN libraries also include a menagerie of activation functions. An 
activation function applies a final, usually nonlinear, transformation to the 
output of a layer. Different activation functions are used for different 
purposes. To be successful in building a DNN, one must understand what those 
activation functions do for you and which ones to use. (A few common ones 
appear after this list.)
  *   I'm especially interested in DNNs that use reinforcement learning. That's 
because the first DNN work that impressed me was DeepMind's DNNs that learned 
to play Atari games--and then Go, etc. An important advantage of Reinforcement 
Learning (RL) is that it doesn't depend on mountains of labeled data.
  *   I find RL systems more interesting than image recognition systems. One 
of the striking features of many image recognition systems is that they can be 
thrown off by changing a small number of pixels in an image. The changed image 
would look to a human observer just like the original, but it might fool a 
trained NN into labeling the image as a banana rather than, say, an 
automobile, which is what it really is. To address this problem, people have 
developed adversarial training (related in spirit to, but distinct from, 
Generative Adversarial Networks, or GANs), which attempts to find such 
weaknesses in a neural net during training and then to train the NN not to 
have those weaknesses. This is a fascinating line of work, but as far as I can 
tell, it mainly shows how fragile some NNs are and doesn't add much conceptual 
depth to one's understanding of how NNs work.
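
Here is the single-neuron adder in code. This is a minimal sketch assuming 
Keras; the training data, epoch count, and optimizer are my own illustrative 
choices. A single Dense neuron with no activation function computes 
w1*x1 + w2*x2 + b, so learning to add just means learning w1 ≈ w2 ≈ 1 and 
b ≈ 0:

    import numpy as np
    from tensorflow import keras

    X = np.random.uniform(-10, 10, size=(1000, 2))   # pairs of numbers
    y = X.sum(axis=1)                                # their sums

    # One layer, one neuron, no activation: output = w1*x1 + w2*x2 + b.
    model = keras.Sequential([
        keras.Input(shape=(2,)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=200, verbose=0)

    print(model.predict(np.array([[3.0, 4.0]])))   # should be close to 7
    print(model.layers[0].get_weights())           # roughly [1, 1] and [0]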
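
In the same spirit, a small demonstration of the universal-approximation idea: 
a single hidden layer, made wide enough, can fit a continuous curve as closely 
as you like. Here it fits sin(x); the hidden width (64) and the training 
settings are illustrative guesses:

    import numpy as np
    from tensorflow import keras

    X = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
    y = np.sin(X)

    # One hidden layer; widening it allows an arbitrarily close fit.
    model = keras.Sequential([
        keras.Input(shape=(1,)),
        keras.layers.Dense(64, activation="tanh"),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=500, verbose=0)
    print(model.evaluate(X, y, verbose=0))   # mean squared error; should be small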
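
A taste of NumPy and pandas on toy data, just to show their flavor:

    import numpy as np
    import pandas as pd

    # NumPy: vectorized math on whole arrays, no explicit Python loops.
    a = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(a @ a.T)           # matrix product
    print(a.mean(axis=0))    # column means

    # pandas: labeled, tabular data built on top of NumPy arrays.
    df = pd.DataFrame({"x": [1, 2, 3], "y": [2.0, 4.1, 6.2]})
    print(df.describe())     # summary statistics per column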
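
To show how the special-purpose layers slot together, here is a toy sequence 
classifier using Embedding, LSTM, and Dropout layers. The layer sizes, 
sequence length, and binary-classification setup are arbitrary choices of 
mine; convolutional or transformer layers would slot in the same way:

    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        keras.Input(shape=(100,)),    # sequences of 100 token ids
        layers.Embedding(input_dim=10000, output_dim=32),  # ids -> vectors
        layers.LSTM(64),        # long short-term memory over the sequence
        layers.Dropout(0.5),    # randomly zero units to curb overfitting
        layers.Dense(1, activation="sigmoid"),  # probability of positive class
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.summary()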
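
Finally, a few common activation functions computed directly in NumPy, with 
the roles they conventionally play (the role notes reflect common practice, 
not hard rules):

    import numpy as np

    x = np.linspace(-3.0, 3.0, 7)
    relu = np.maximum(0.0, x)            # hidden layers: cheap, resists saturation
    sigmoid = 1.0 / (1.0 + np.exp(-x))   # output: probability of a single class
    tanh = np.tanh(x)                    # hidden layers: zero-centered sigmoid
    print(relu, sigmoid, tanh, sep="\n")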

I'm impressed with this list of things I sort of know. If you had asked me 
before I started writing this email I wouldn't have thought I had learned as 
much as I have. Even so, I feel like I don't understand much of it beyond a 
superficial level.

So far I've done all my exploration using Google's Colab (Google's Python 
notebook implementation) and Kaggle's similar Python notebook implementation. 
(I prefer Colab to Kaggle.) Using either one, it's super nice not to have to 
download and install anything!

I'm continuing my journey to learn more about DNNs. I'd be happy to have 
company and to help develop materials to teach about DNNs. (Developing teaching 
materials always helps me learn the subject being covered.)

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Sun, Jan 8, 2023 at 1:48 AM glen 
<[email protected]> wrote:
Yes, the money/expertise bar is still pretty high. But TANSTAAFL still applies. 
And the overwhelming evidence is coming in that specific models do better than 
those trained up on diverse data sets, "better" meaning less prone to subtle 
bullsh¡t. What I find fascinating is that tools like OpenAI's *facilitate* 
trespassing. We have a wonderful bloom of non-experts claiming they understand 
things like "deep learning". But do they? An old internet meme is brought to 
mind: "Do you even Linear Algebra, bro?" >8^D

On 1/8/23 01:06, Jochen Fromm wrote:
> I have finished a number of Coursera courses recently, including "Deep 
> Learning & Neural Networks with Keras" which was ok but not great. The 
> problems with deep learning are
>
> * to achieve impressive results like ChatGPT from OpenAI or LaMDA from Google, 
> you need to spend millions on hardware
> * only big organisations can afford to create such expensive models
> * the resulting network is a black box, and it is unclear why it works the way 
> it does
>
> In the end it is just the same old back propagation that has been known for 
> decades, just running on more computers and trained on more data. Peter Norvig 
> and his co-authors call it "The Unreasonable Effectiveness of Data":
> https://research.google.com/pubs/archive/35179.pdf
>
> -J.
>
>
> -------- Original message --------
> From: Russ Abbott <[email protected]>
> Date: 1/8/23 12:20 AM (GMT+01:00)
> To: The Friday Morning Applied Complexity Coffee Group 
> <[email protected]<mailto:[email protected]>>
> Subject: Re: [FRIAM] Deep learning training material
>
> Hi Pieter,
>
> A few comments.
>
>   * Much of the actual deep learning material looks like it came from the 
> Kaggle "Deep Learning <https://www.kaggle.com/learn/intro-to-deep-learning>" 
> sequence.
>   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to Python.
>   * More importantly, I would put the How-to-use-Python stuff into a 
> preliminary class. Assume your audience knows how to use Python and focus on 
> Deep Learning. Given that, there is only a minimal amount of information 
> about Deep Learning in the write-up. If I were to attend the workshop and 
> thought I would be learning about Deep Learning, I would be disappointed--at 
> least with what's covered in the write-up.
>
>     I say this because I've been looking for a good intro to Deep Learning. 
> Even though I taught Computer Science for many years, and am now retired, I 
> avoided Deep Learning because it was so non-symbolic. My focus has always 
> been on symbolic computing. But Deep Learning has produced so many 
> extraordinarily impressive results, I decided I should learn more about it. I 
> haven't found any really good material. If you are interested, I'd be more 
> than happy to work with you on developing some introductory Deep Learning 
> material.
>
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp 
> <[email protected]<mailto:[email protected]> 
> <mailto:[email protected]<mailto:[email protected]>>> wrote:
>
>     Thanks to the kind support of OpenAI's ChatGPT, I am in the process of 
> gathering materials for a comprehensive and hands-on deep learning workshop. 
> Although it is still a work in progress, I welcome any interested parties to 
> take a look and provide their valuable input. Thank you!
>
>     You can get it from:
>     https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
>

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
