On Monday, October 19, 2015 at 2:31:16 PM UTC-4, lawrence dworsky wrote:
>
> This last point, I think, is very important. I'm a Julia newbie so I'm just
> reading all the stuff about the interaction between variable typing and
> execution speed. I've gotta tell you, having a language where I don't worry
> about this at all and then the compiler handles any optimization issues has
> a lot to say for it. Why is this "craziness?"
>
An untyped variable in Julia is a completely different beast from an untyped variable in Fortran 77-style implicit typing. If I write f(x) = x+1 in Julia, the code is totally *polymorphic*: the function f(x) works for any x that supports addition (+) with an Int (1), including bigints, arrays, all the different floating-point types, complex numbers, external-package types like quaternions, etc. Furthermore, f(x) is fast for all of those types, because the Julia compiler specializes f into a different compiled version for each argument type as needed.

If you use an implicitly typed variable "x" in Fortran (without "implicit none"), then it is *not* polymorphic. x has one and only one type (real, i.e. floating-point, either single or double precision depending on a compiler switch), and your code only works with that type.

The advantage of the polymorphism and dynamism in Julia is that code is much more re-usable: we can implement a function like sum(x) *once* and it will work with any type. (This is a generalization of the polymorphism that e.g. object-oriented programming or C++ templates give you.) The price you pay for this is that if you want it to be fast, you need to think a little bit more about types than you would with code that works with one and only one type.

(If you want code that works with one and only one type in Julia, of course you can do it, just by adding explicit type declarations. But it is more powerful to get used to thinking about dynamic polymorphism, and to learn the simple rules that will make such code fast.)
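To make this concrete, here is a minimal sketch of what that looks like in practice. (mysum is just an illustrative name, not the actual implementation of sum in Base; the point is that a single generic definition works, and is specialized, for every element type.)

    # One generic definition; Julia compiles a specialized version per argument type.
    f(x) = x + 1

    f(3)             # Int     -> 4
    f(3.5)           # Float64 -> 4.5
    f(big(10)^30)    # BigInt  -> arbitrary precision, no overflow
    f(2 + 3im)       # Complex -> 3 + 3im

    # A generic sum written once, usable with any element type supporting + and zero.
    function mysum(a)
        s = zero(eltype(a))   # a zero of the element type, so s stays type-stable
        for x in a
            s += x
        end
        return s
    end

    mysum([1, 2, 3])                        # -> 6
    mysum([1.0, 2.5, 3.5])                  # -> 7.0
    mysum(Complex{Float64}[1+2im, 3-1im])   # -> 4.0 + 1.0im

Note the one bit of "thinking about types" in mysum: starting the accumulator with zero(eltype(a)) rather than the literal 0 keeps s the same type throughout the loop, which is exactly the kind of simple rule that lets the compiler generate fast specialized code for each element type.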
