thanks (how does someone working on embedded c++ get to work with 
adaboost?!)

what i am actually going with is a bunch of links to examples (i can email 
this to everyone before the talk, which will be over google meetup):

(if anyone has corrections to what follows in the next few hours i am happy 
to hear them, although i think it's pretty uncontroversial)


* Data analysis and plotting, something like R
  http://dcjones.github.io/Gadfly.jl/
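
  For a taste, something like (assuming Gadfly is installed):

  using Gadfly
  plot(x=1:100, y=cumsum(randn(100)), Geom.line)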

* IJulia reminds me of Mathematica (based on IPython)
  https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks#julia
  http://nbviewer.ipython.org/url/beowulf.csail.mit.edu/18.337/fractals.ipynb
  http://nbviewer.ipython.org/github/JuliaOpt/juliaopt-notebooks/blob/master/notebooks/Matrix%20Completion%20with%20Binary%20Data.ipynb

* Like Matlab, it makes using arrays easy
  http://quant-econ.net/jl/julia_arrays.html
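
  A couple of throwaway lines to show the flavour:

  A = rand(3, 3)       # 3x3 matrix of random numbers
  b = A * ones(3)      # matrix-vector product
  A[2, :]              # slicing, like Matlab
  A'                   # transpose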

* The neat thing (to me) is that unlike Matlab, the array code is often
  (not always - it calls out to BLAS etc) written in Julia itself.
  http://julialang.org/benchmarks/
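
  You can check that from the repl, e.g. (sum over a Float64 array is plain
  Julia, not BLAS):

  @which sum([1.0, 2.0, 3.0])    # shows which Julia method handles this
  @less  sum([1.0, 2.0, 3.0])    # and displays its (Julia) source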

* More generally, it has managed ("automatic") memory - garbage collection.
  It doesn't have objects/classes, but it has something similar:
  it combines types (like C structs) with multiple dispatch.

  For example, to define complex types:

  type Complex64 <: Number
      x::Float64
      y::Float64
  end

  and then define (after an "import Base: +" so this extends the existing +)

  +(a::Complex64, b::Complex64) = Complex64(a.x + b.x, a.y + b.y)
  ... etc

  Note that + is already defined for a pile of other types (start julia and
  type "methods(+)" to see them all).

  Which is almost the same as defining a Complex64 class with a "+" method.
  Main differences are (1) all argument types are used to choose the method
  and (2) only final (concrete) types have fields (so the memory layout is
  known).

  And it's fast because the compiler compiles each function at runtime,
  specialised for the argument types it is called with.  So the code ends up
  being compiled for a very specific type, even if in your code you just had
  "a+b" and it wasn't clear whether a and b were Complex64 or Float64 or
  Int32 or ...

  The downside is that when you first run a program it is actually slow, as
  it compiles things.  But second and further calls to any routine are fast.
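
  You can see that from the repl (mysum is just a made-up name here):

  mysum(a, b) = a + b        # one generic definition

  @time mysum(1, 2)          # first call: includes compile time
  @time mysum(1, 2)          # second call: fast
  @time mysum(1.5, 2.5)      # new argument types: compiled again, then fast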

  I've written CRC32 (checksum) code of comparable speed to libz (pretty much
  the C benchmark).  It wasn't "simple", but it was no harder than C.  You
  have profiling tools, you unroll loops, etc etc.

  (In practice you would probably do:

  type Complex{F<:FloatingPoint} <: Number
      x::F
      y::F
  end

  because it has parameterised types)
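
  A sketch of what that buys you (using a made-up name, MyComplex, to avoid
  Base's own Complex, and the same 0.3-style syntax as above) - one
  definition then covers every float width:

  import Base: +

  type MyComplex{F<:FloatingPoint} <: Number
      x::F
      y::F
  end

  +(a::MyComplex, b::MyComplex) = MyComplex(a.x + b.x, a.y + b.y)

  MyComplex(1.0f0, 2.0f0) + MyComplex(3.0f0, 4.0f0)   # MyComplex{Float32}
  MyComplex(1.0, 2.0) + MyComplex(3.0, 4.0)           # MyComplex{Float64}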


On Thursday, 10 September 2015 05:21:42 UTC-3, Carlos Becker wrote:
>
> Hi Andrew, my slides are here, 
> https://sites.google.com/site/carlosbecker/a-few-notes , they are for 
> v0.3: 
>
> If you need the openoffice original let me know, I can send it to you.
> Cheers.
>
> On Wednesday, 9 September 2015 14:07:36 (UTC+2), andrew cooke 
> wrote:
>>
>> ok, thanks everyone i'll have a look at all those.  andrew
>>
>> On Tuesday, 8 September 2015 17:58:33 UTC-3, andrew cooke wrote:
>>>
>>>
>>> I need to give a presentation at work and was wondering if slides 
>>> already exist that:
>>>
>>>   * show how fast it is in benchmarks
>>>
>>>   * show that it's similar to matlab (matrix stuff)
>>>
>>>   * show that you can write fast inner loops
>>>
>>>  For bonus points:
>>>
>>>   * show how you can add other numerical types at no "cost"
>>>
>>>   * show how multiple dispatch can be useful
>>>
>>>   * show how someone used to OO in, say, python, won't feel too lost
>>>
>>> Preferably just one slide per point.  Very short.
>>>
>>> Thanks,
>>> Andrew
>>>
>>>
