While I don't think it's true that numerical computing is only 5% matrix 
math, most user-facing code isn't matrix math.

At its core, just about every numerical algorithm is matrix math. Every 
nonlinearity or connection between terms becomes a matrix, and so every 
equation reduces to either evaluating A*x or solving A\b. Even solving 
nonlinear equations comes down to function calls that build matrices, which 
we then hit with A*x or A\b (think of Newton/Broyden methods). And even A\b 
itself reduces to repeated A*x products in every iterative solver. 
Higher-dimensional problems use tensor operators that usually generalize 
matrix multiplication, so you can often write them more compactly as matrix 
multiplications along different dimensions (and compute them efficiently 
via BLAS). So at its core, building all of these numerical libraries is 
about repeatedly computing A*x, and it may well be the most used operation 
(along with +b).
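To make the Newton/Broyden point concrete, here is a minimal sketch of the pattern: solving a nonlinear system F(x) = 0 turns into repeatedly building a Jacobian matrix J and doing the A\b linear solve. (Hand-coded 2x2 example with an analytic Jacobian; real code would use automatic differentiation and a convergence tolerance.)

```julia
F(x) = [x[1]^2 + x[2]^2 - 1.0,   # unit circle
        x[1] - x[2]]             # line y = x
J(x) = [2x[1]  2x[2];            # analytic Jacobian of F
        1.0   -1.0]

function newton(F, J, x; iters = 20)
    for _ in 1:iters
        x -= J(x) \ F(x)         # the A\b solve at the core of each step
    end
    return x
end

x = newton(F, J, [1.0, 0.0])     # converges to [√2/2, √2/2]
```

Every step of this loop is "build a matrix, solve a linear system" -- exactly the A\b workhorse described above.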

However, outside of methods development, you normally aren't using matrix 
functions and are rather defining things element-wise. Thus all of the 
.'ing has been passed off to the user, and so you have to define nasty 
things like f(x,y,z) = x.^(y.*z).*x. However, I think this change will 
actually eliminate this issue in many cases: a user will just pass a 
function f defined on scalars, the algorithm will essentially just call f. 
internally, and with the ease of this kind of broadcasting, I think many 
algorithms will "upgrade" to not require vectorized inputs anymore.
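A sketch of what that "upgrade" looks like (apply_rhs is a hypothetical library function, not a real API): the user writes a plain scalar definition, and the library adds the single dot itself.

```julia
# Plain scalar definition -- no .^ or .* needed from the user.
f(x, y, z) = x^(y * z) * x

# Inside the algorithm, one dot call applies it element-wise
# (and fuses into a single loop over the arrays).
apply_rhs(f, xs, ys, zs) = f.(xs, ys, zs)

apply_rhs(f, [1.0, 2.0], [1.0, 1.0], [2.0, 3.0])  # == [1.0, 16.0]
```

The nasty pre-dotted definition f(x,y,z) = x.^(y.*z).*x and this version compute the same thing, but the scalar form is what a user naturally writes.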

This syntax is a very Julia idea, but the main issue will be learning to 
use the feature. It is very Julia because it is simple but explicit syntax. 
Julia thrives on adding slight bits of explicitness to scripting-language 
style, which it then uses for compiler optimization (this is essentially 
what multiple dispatch is doing). Developers exploit this explicitness too: 
they can parse through code, read it almost as pseudocode, and rewrite it 
to do the operations you want but in a much more optimized way (see 
ParallelAccelerator.jl and its current limitations as an example of a tool 
that could really use this explicitness of vectorization in user-defined 
functions). 

But with all of the optimization goodness this can provide, it will likely 
turn away users who expect sin([4;5;6]) to work without the dot the first 
time they use the function. More effort does need to be put into new-user 
issues, because in my opinion there is a barrier new users have to climb 
before they see all of the advantages, and this is just the newest 
iteration of that problem.
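The stumbling block and its one-character fix, for reference (this is the post-0.5 dot-call syntax, where the vectorized sin methods are going away):

```julia
v = [4, 5, 6]
# sin(v)        # MethodError: sin has no method for vector arguments
s = sin.(v)     # the dot broadcasts: [sin(4), sin(5), sin(6)]
```

Once a new user internalizes "the dot means element-wise, always," the rule is arguably simpler than remembering which functions happen to have vectorized methods -- but they do have to climb that step first.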


On Tuesday, May 24, 2016 at 4:32:59 PM UTC-7, Siyi Deng wrote:
>
> I tend to agree that explicit broadcasting is better than numpy's default 
> behavior.
>
> However, IMO operations on the n-d arrays is better defaulted to 
> element-wise, and n-d with scalar default to element-wise too.
>
> Think about it, a lot of operations are not even commonly defined for 3-d 
> and above, so why waste the most straightforward syntaxes (a+b, a*b, a/b) 
> on special-case matrix operations?  
>
> In numerical computing, matrix math is only like 5% of the time and the 
> other 95% is element-wise or tensor-like.
>
>
