[julia-users] Re: Advice on vectorizing functions

2015-04-14 Thread Patrick O'Leary
On Tuesday, April 14, 2015 at 1:38:21 PM UTC-5, SixString wrote:
>
> Eliding the types completely results in warnings about method ambiguity.
>

Yes, of course--good catch. 


[julia-users] Re: Advice on vectorizing functions

2015-04-14 Thread SixString
Thanks Lex and Patrick for the help on invariance of arrays.  Eliding the 
types completely results in warnings about method ambiguity.  For the 
benefit of other Julia newcomers, here are the functions that seem to work 
for me:

function ldexp!{T<:FloatingPoint}(x::StridedArray{T}, e::Int)
    for i = 1:length(x)
        x[i] = ldexp(x[i], e)
    end
    x
end

function Base.Math.ldexp{T<:FloatingPoint}(x::StridedArray{T}, e::Int)
    res = similar(x)
    for i = 1:length(x)
        res[i] = ldexp(x[i], e)
    end
    res
end

I changed Array to StridedArray so that SubArrays are supported too.  The 
second function adds a new method to a function that already exists in Base; 
because the name is fully qualified (Base.Math.ldexp), no explicit import is 
needed.
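
For example, something like this should work for both a plain array and a 
subarray view (an untested sketch; the variable names are just for 
illustration):

x = rand(8)            # Vector{Float64}
ldexp!(x, 3)           # in place: every element scaled by 2^3
y = ldexp(x, -1)       # non-mutating version returns a new array

s = sub(x, 2:5)        # a SubArray over a range is a StridedArray
ldexp!(s, 2)           # dispatches to the same method and mutates that part of x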




[julia-users] Re: Advice on vectorizing functions

2015-04-14 Thread Patrick O'Leary


On Monday, April 13, 2015 at 10:38:28 PM UTC-5, ele...@gmail.com wrote:
>
> Why does this version result in complaints about no matching method for 
> (::Array{Float64,1}, ::Int64)?  super(Float64) is FloatingPoint, and 
> ldexp() has methods for all subtypes of FloatingPoint paired with Int.
>
> But Float64 <: FloatingPoint does not imply Array{Float64} <: Array{FloatingPoint}; see 
> http://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29#Arrays
> for why.
>
> Cheers
> Lex
>

Reference in the Julia manual: 
http://julia.readthedocs.org/en/latest/manual/types/#parametric-composite-types

I'm starting on a quixotic quest to follow up on every mention of 
invariance in method signatures to show how to deal with it, because on its 
face it sounds limiting--"you have to duplicate all that code? This isn't 
generic at all! I'm never going to use Julia again!"

Never fear.

function ldexp!{T<:FloatingPoint}(x::Array{T}, e::Int)
    ...
end

This is a generic parametric method that handles the entire family of Array 
types whose elements are subtypes of FloatingPoint.

Note that you can probably elide the types completely here--it probably 
suffices for your application to define:

function ldexp!(x, e)
    ...
end

And let Julia handle the type specialization for you.
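
For instance, a fully untyped version like the following still gets compiled 
into type-specialized code for each concrete element type it is called with 
(an untested sketch; the name ldexp_any! is just for illustration):

function ldexp_any!(x, e)
    for i = 1:length(x)
        x[i] = ldexp(x[i], e)
    end
    x
end

ldexp_any!(rand(4), 3)            # specialized for Vector{Float64}
ldexp_any!(ones(Float32, 4), 3)   # specialized again for Vector{Float32}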


[julia-users] Re: Advice on vectorizing functions

2015-04-13 Thread elextr


On Tuesday, April 14, 2015 at 9:40:49 AM UTC+10, SixString wrote:
>
> For code outside of my inner loops, I still prefer vectorized functions 
> for compactness and readability.
>
> This works:
>
> function ldexp!(x::Array{Float64}, e::Int)
>     for i = 1:length(x)
>         x[i] = ldexp(x[i], e)
>     end
>     x
> end
>
> Why does this version result in complaints about no matching method for 
> (::Array{Float64,1}, ::Int64)?  super(Float64) is FloatingPoint, and 
> ldexp() has methods for all subtypes of FloatingPoint paired with Int.
>

But Float64 <: FloatingPoint does not imply Array{Float64} <: Array{FloatingPoint}; see 
http://en.wikipedia.org/wiki/Covariance_and_contravariance_%28computer_science%29#Arrays
for why.
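
A quick check at the REPL makes the difference concrete (untested sketch):

julia> Float64 <: FloatingPoint
true

julia> Array{Float64} <: Array{FloatingPoint}
false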

Cheers
Lex
 

>
> function ldexp!(x::Array{FloatingPoint}, e::Int)
>     for i = 1:length(x)
>         x[i] = ldexp(x[i], e)
>     end
>     x
> end
>
>

[julia-users] Re: Advice on vectorizing functions

2015-04-13 Thread SixString
For code outside of my inner loops, I still prefer vectorized functions for 
compactness and readability.

This works:

function ldexp!(x::Array{Float64}, e::Int)
    for i = 1:length(x)
        x[i] = ldexp(x[i], e)
    end
    x
end

Why does this version result in complaints about no matching method for 
(::Array{Float64,1}, ::Int64)?  super(Float64) is FloatingPoint, and 
ldexp() has methods for all subtypes of FloatingPoint paired with Int.

function ldexp!(x::Array{FloatingPoint}, e::Int)
    for i = 1:length(x)
        x[i] = ldexp(x[i], e)
    end
    x
end



[julia-users] Re: Advice on vectorizing functions

2015-03-28 Thread SixString
I found a way to add new vectorized methods with the same name as the base 
(scalar) methods:

`import Base.Math.ldexp`
`ldexp(x::Array{Float64,1}, e::Int) = [ldexp(x[i], e) for i=1:length(x)]`
`ldexp(x::Array{Float64,2}, e::Int) = [ldexp(x[i,j], e) for i=1:size(x,1), j=1:size(x,2)]`
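
With those definitions, something like the following should dispatch to the 
new methods (an untested sketch; variable names are just for illustration):

x = rand(8)            # Vector{Float64}
y = ldexp(x, 3)        # new 1-d method: y[i] == ldexp(x[i], 3)

M = rand(3, 4)         # Matrix{Float64}
N = ldexp(M, -2)       # new 2-d method, same shape as M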

@Steven G. Johnson: your point is well taken.  In Python, I have already 
seen a big speedup from the numexpr library compared to vectorized NumPy:
https://github.com/pydata/numexpr
A similar concept already exists in Julia:
https://github.com/lindahua/Devectorize.jl

I will consider all options when trying to speed up the critical parts of 
my code.  When vectorized functions prove optimal, I could still use advice 
on readability and on minimizing code rewrites for future Julia versions.


[julia-users] Re: Advice on vectorizing functions

2015-03-28 Thread Steven G. Johnson
In general, you need to unlearn the intuition from Matlab/Python that 
vectorized/built-in functions are fast, and functions or loops you write 
yourself are slow.  There's not the same drive to vectorize everything in 
Julia because not only are your own loops fast, but writing your own loops 
(or using something equivalent like a comprehension) is often much faster 
than "vectorized" code — especially in the common case where you are doing 
several vectorized operations in sequence.

For example, see problem 3(c) in the following notebook (solutions to a 
recent homework assignment in my class):
http://nbviewer.ipython.org/url/math.mit.edu/~stevenj/18.335/pset3sol-s15.ipynb
which shows a 15x speedup from writing your own loop to compute A+3B+4A.^2 
vs. the vectorized expression.
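
Roughly, the hand-written loop for that example looks something like this (an 
untested sketch; assume A and B are Vector{Float64} and the function name is 
arbitrary):

function fused(A, B)
    out = similar(A)
    for i = 1:length(A)
        # a single pass over the data, with no temporary arrays
        out[i] = A[i] + 3*B[i] + 4*A[i]^2
    end
    out
end

The vectorized expression, by contrast, allocates a separate temporary array 
for each intermediate operation, which is where most of the difference comes 
from.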

Vectorization is certainly extremely convenient for linear algebra (and for 
matrix-matrix operations it can still give big speedups over your own code) 
and for basic arithmetic operations on arrays.  But there isn't the same 
drive to vectorize *everything* in a language like Julia.