[julia-users] Interpolations.jl: irregular grids

2016-11-17 Thread Pieterjan Robbe
I am trying to do interpolation on irregular grids using Interpolations.jl.
From its documentation:

> Currently its support is best for B-splines and also supports irregular
> grids.

Does anyone know how to get the irregular grids to work? Basically, I have
some triples (x,y,u) where (x,y) is a data point and u is a measurement. I'd
like to find the value at (x*,y*).
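
For reference, a minimal sketch of the Gridded interface (the knot vectors,
values and query point below are made up; this assumes the samples lie on a
rectilinear but unevenly spaced grid, while truly scattered (x,y,u) triples
would need a different approach or package):

using Interpolations

xs = [0.0, 0.3, 1.1, 2.0]           # irregular knots in x (sorted)
ys = [0.0, 0.5, 0.9, 1.7, 3.0]      # irregular knots in y
us = rand(length(xs), length(ys))   # one measurement per grid node

itp = interpolate((xs, ys), us, Gridded(Linear()))
itp[0.7, 1.2]    # value at (x*, y*); newer versions use itp(0.7, 1.2)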


[julia-users] Type with vector container parametrized in vector length

2016-08-05 Thread Pieterjan Robbe
According to
http://docs.julialang.org/en/release-0.4/manual/performance-tips/#avoid-fields-with-abstract-containers,
this is the way to go for types with a container field:

type MyVector{T<:AbstractFloat,V<:AbstractVector}
    v::V

    MyVector(v::AbstractVector{T}) = new(v)
end
MyVector(v::AbstractVector) = MyVector{eltype(v),typeof(v)}(v)

This is indeed type-stable:

@code_warntype MyVector(0:0.1:1)

However, if I also want to parametrize my type in the vector length L, like

type MySecondVector{L,T<:AbstractFloat,V<:AbstractVector}
    v::V

    MySecondVector(v::AbstractVector{T}) = new(v)
end
MySecondVector(v::AbstractVector) =
    MySecondVector{length(v),eltype(v),typeof(v)}(v)

This is no longer type-stable:

@code_warntype MySecondVector(0:0.1:1)

Can I enforce the length as a parameter so that it is known to the
compiler, or should I look into FixedSizeArrays instead?
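
For comparison, a sketch with a made-up type and constructor name, where the
length is supplied explicitly as a value type: length(v) itself is a runtime
quantity, so the compiler can only see L when it comes from a literal or
another type parameter at the call site.

type MyThirdVector{L,T<:AbstractFloat,V<:AbstractVector}
    v::V
end

# the caller passes the length as Val{L}, e.g. from a literal
make_vector{L}(::Type{Val{L}}, v::AbstractVector) =
    MyThirdVector{L,eltype(v),typeof(v)}(v)

@code_warntype make_vector(Val{11}, 0:0.1:1)   # length(0:0.1:1) == 11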


[julia-users] Re: how to save array to text file "correctly"?

2016-06-24 Thread Pieterjan Robbe


f = open("myfile.csv", "w")
for i in 1:length(data)
    write(f, @sprintf("%20.16f\n", data[i]))
end
close(f)


shell> cat myfile.csv
 -0.5000000000000000
  0.0000000000000000
 -0.0000218199000000
  0.0000153967000000
 -0.0000178990000000
  0.0000126717000000
 -0.0000022432700000
  0.0000016008700000
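
Another option along the same lines (same data vector as above): format the
numbers first and let writedlm write the lines.

writedlm("myfile.csv", [@sprintf("%.16f", x) for x in data])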

On Friday, June 24, 2016 at 04:55:37 UTC+2, Hoang-Ngan Nguyen wrote:
>
> Hi,
>
> I have the following array
> data = [
>  -0.5 
>  0.0 
>  -2.18199e-5
>  1.53967e-5
>  -1.7899e-5 
>  1.26717e-5
>  -2.24327e-6
>  1.60087e-6]
>
>
> When I save it using either 
>
> writecsv("filename.csv",data)
>
> or
>
> writedlm("filename.csv",data,",")
>
>
> I get this
> -.5
> 0
> -21819881018654233e-21
> 153966589305464e-19
> -17898976869144106e-21
> 12671715235247999e-21
> -22432716786997375e-22
> 16008706220269127e-22
>
> Is there any way for me to, instead, get the following:
> -.5
> 0
> -.21819881018654233
> .153966589305464
> -.17898976869144106
> .12671715235247999
> -.022432716786997375
> .016008706220269127
>
> Thanks,
> Ngan
>
>

[julia-users] Re: automatic export of all enum values

2016-04-26 Thread Pieterjan Robbe
nice solution, thanks!

On Tuesday, April 26, 2016 at 18:13:22 UTC+2, Steven G. Johnson wrote:
>
> On Tuesday, April 26, 2016 at 11:07:45 AM UTC-4, Pieterjan Robbe wrote:
>>
>> Is it possible to export all values of an enum defined inside a module?
>> That is, without rewriting all values after 'export'.
>>
>
> for s in instances(WindDirection)
>     @eval export $(symbol(s))
> end
>
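
Putting Steven's loop inside the module, a minimal sketch of the result
(Julia 0.4 spelling; symbol was later renamed Symbol):

module Compass

export WindDirection
@enum WindDirection north east south west

for s in instances(WindDirection)
    @eval export $(symbol(s))
end

end

using Compass
north   # now resolves to Compass.north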


[julia-users] automatic export of all enum values

2016-04-26 Thread Pieterjan Robbe
Is it possible to export all values of an enum defined inside a module?
That is, without rewriting all values after 'export'.

julia> module Compass
       export WindDirection
       @enum WindDirection north east south west
       end

julia> using Compass

julia> WindDirection
Compass.WindDirection

julia> north
ERROR: UndefVarError: north not defined


[julia-users] Re: Examples of integrating Fortran code in Julia

2016-01-18 Thread Pieterjan Robbe
I think you need to specify the full path to the module, not just the
module name.
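
For example, a hedged ccall sketch (library, routine and signature are all
made up; this assumes a plain external routine compiled into a shared library
with gfortran, which appends a trailing underscore and passes arguments by
reference):

# hypothetical build step:  gfortran -shared -fPIC mysum.f90 -o libmymod.so
const libmymod = abspath("libmymod.so")   # full path, not just "libmymod"

x = collect(1.0:5.0)
s = ccall((:mysum_, libmymod), Float64,
          (Ptr{Float64}, Ref{Cint}), x, Cint(length(x)))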

[julia-users] Re: @everywhere and memory allocation

2015-12-01 Thread Pieterjan Robbe
No, not really. SharedArrays only support bitstypes. I have an application
where the function to be executed in parallel depends on some fixed data (some
constants, a dict, some arrays, etc.). I would like to replace my @parallel for
loop by @everywhere (because of the reuse of my Cholesky factorization), but
I'm worried that this will explode my memory usage.

[julia-users] Re: Cholmod Factor re-use

2015-12-01 Thread Pieterjan Robbe
See this <https://github.com/JuliaLang/julia/issues/14155> issue.
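
For reference, a sketch of the factor re-use itself on a single process (the
matrix construction is made up; as Matthew notes below, the module where the
CHOLMOD types live has been restructured between versions, but cholfact and \
work without importing it):

n = 1000
B = sprandn(n, n, 0.01)
A = B + B' + n*speye(n)   # symmetric positive definite
F = cholfact(A)           # sparse (CHOLMOD) Cholesky factor

b1 = randn(n); b2 = randn(n)
x1 = F \ b1               # the factorization is computed once
x2 = F \ b2               # ... and re-used here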

On Tuesday, December 1, 2015 at 13:08:12 UTC+1, Matthew Pearce wrote:
>
> Thanks, but I'm afraid not. I guess the inclusion of Cholmod into Julia 
> must have been restructured since that part of the code was written.
> E.g.:
>
> ```
> julia> using Base.LinAlg.CHOLMOD.CholmodFactor
> ERROR: UndefVarError: CHOLMOD not defined
> ```
>
> Matthew
>
> On Tuesday, November 24, 2015 at 3:43:21 PM UTC, Pieterjan Robbe wrote:
>>
>> is this of any help?
>>
>> https://groups.google.com/forum/#!msg/julia-users/tgO3hd238Ac/olgfSJLXvzoJ
>>
>

[julia-users] @everywhere and memory allocation

2015-12-01 Thread Pieterjan Robbe
Does the @everywhere macro allocate extra memory to make local copies of a
matrix for every processor?

A = sprandn(10000,10000,0.7)
@time A = sprandn(10000,10000,0.7)

gives 2.422259 seconds (23 allocations: 1.565 GB, 3.77% gc time)

@everywhere A = sprandn(10000,10000,0.7)
@time @everywhere A = sprandn(10000,10000,0.7)

gives 16.495639 seconds (1.31 k allocations: 1.565 GB, 6.14% gc time).

However, I know that there are local copies of the matrix on each processor:

@everywhere println(A[1,1])

-1.2751101862102039
From worker 5:  0.0
From worker 4:  0.0
From worker 2:  0.853669869355948
From worker 3:  0.0

Is there a way to use the @everywhere macro without allocating extra 
memory? Suppose A was created using

A = speye(10000,10000,0.7)

is there also a copy of the matrix A for all of the workers?
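
A hedged sketch of an alternative (Julia 0.4-era API): @everywhere re-runs the
expression on every process, so each worker builds and stores its own random
matrix, which is why A[1,1] differs above. To give every process the same
matrix, one can generate it once on the master and ship it, although each
worker still ends up holding its own copy in its own memory:

for p in workers()
    # interpolating A sends the master's matrix to worker p and binds it
    # to the global A in Main on that worker
    remotecall_fetch(p, x -> (global A = x; nothing), A)
end
@everywhere println(A[1,1])   # now prints the same value on every process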


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Oh genius ;) why didn't I come up with that myself? Thanks a lot!

[julia-users] Metaprogramming and function scope

2015-11-24 Thread Pieterjan Robbe
Why can't I parse and evaluate a call to a function defined within the scope
of another function?
That is, the following terminates with an UndefVarError: bar not defined:

function foo()
  function bar()
    x
  end
  return eval(parse("bar()"))
end

x = 7
foo()

However, I can do

function foo()
  function bar()
    x
  end
  return bar()
end

x = 7
foo()

and

function foo()
  return eval(parse("bar()"))
end

function bar()
  x
end

x = 7
foo()

Many thanks!
-Pieterjan


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Oops, looks like I was timing the wrong thing :) Sorry!
My results are similar now (on a 2014 Mac, Julia 0.4):
time = 0.5201348707

time = 0.0634930367
Thanks a lot!


On Tuesday, November 24, 2015 at 19:20:15 UTC+1, Steven G. Johnson wrote:
>
> (Note that I'm using Julia 0.4 on a 2012 Mac.)
>


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Matlab was running multithreaded; the single-threaded version (LASTN =
maxNumCompThreads(1)) gives time = 0.3449, which is still better. The Julia
version where I store the matrices in an Array now gives time = 0.4078, better
than the previous Julia version but still not what I would expect.


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Thanks for the fast response; that explains a lot. I don't see the 7x
speedup on my machine, though; probably the problem is too small to show
significant differences.


Re: [julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Of course :) I also increased the number of experiments (100 instead of 10) 
and discarded the first entry of the result. When using Steven's function, 
the results are more or less comparable.

         mean     min      max
Matlab   0.3955   0.3470   0.4978
Julia    0.4669   0.3109   0.5757

I'm happy with the results now.



Re: [julia-users] Metaprogramming and function scope

2015-11-24 Thread Pieterjan Robbe
That makes sense :) Is there a workaround? I need to define some (global 
constant) variables, Z1, Z2, Z3 etc. (that's where the parsing comes from) by 
calling a function (bar) that does something with data defined in foo(). I'd 
like to keep this inside a single function, since it's the initialization step 
of a more complex simulation.
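
A sketch of one possible workaround (the function bodies below are made up;
the idea is that eval/@eval always runs in global scope, so the local function
returns the values and the global constants are bound at top level):

function setup()
    data = rand(3)             # stand-in for the data that foo() defines
    bar(i) = sum(data) + i     # bar stays local to setup
    return [bar(i) for i in 1:3]
end

for (i, z) in enumerate(setup())
    # binds the global constants Z1, Z2, Z3 (Julia 0.4: symbol; later: Symbol)
    @eval const $(symbol(string("Z", i))) = $z
end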

[julia-users] Re: Cholmod Factor re-use

2015-11-24 Thread Pieterjan Robbe
is this of any help?

https://groups.google.com/forum/#!msg/julia-users/tgO3hd238Ac/olgfSJLXvzoJ


[julia-users] Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Pieterjan Robbe
Consider the problem of taking a linear combination of m (n x n)-matrices
stored in an (n x n x m)-array A. The weights are stored in a length-m
vector y.
In Matlab, we can accomplish this by

n = 100;
m = 10000;

A = rand(n,n,m);
y = randn(1,m);

times = zeros(1,10);
for cntr = 1:10
    tic;
    Z = sum(repmat(reshape(y,[1 1 m]),[n n 1]).*A,3);
    times(cntr) = toc;
end

sprintf('time = %f', mean(times))

This gives time = 0.3259 on my machine using R2014b. (Note: I am aware of
the ridiculous amount of memory allocated by this solution; however, it
turns out to be 25% faster than using bsxfun, for instance.)

A Julia-implementation that accomplishes the same thing is

function matrixLinComb(A::Array{Float64,3},y::Array{Float64,2})
  # note: m and n here are the globals defined below
  Z = reshape(sum(reshape(y,1,1,m).*A,3),n,n)
end

n = 100
m = 10000

A = rand(n,n,m)
y = randn(1,m)

times = zeros(10)

for cntr = 1:10
  times[cntr] = @elapsed matrixLinComb(A,y)
end
println("time = $(mean(times))")

This gives time = 0.5130 using v0.4.0. I've tried several other
implementations (broadcast, reshaping A and multiplying with *, etc.), but
this turned out to be the fastest method. Does anyone have an idea why
Matlab's repmat approach is superior to the Julia implementation (or, as
my colleagues put it, why "Julia doesn't work")? Thanks!
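
One more data point: a devectorized sketch (not necessarily the function
Steven posted elsewhere in this thread) that forms the linear combination
with an explicit loop and avoids the large repmat/broadcast temporaries:

function matrixLinCombLoop(A::Array{Float64,3}, y::Array{Float64,2})
    n1, n2, m = size(A)
    Z = zeros(n1, n2)
    @inbounds for k = 1:m, j = 1:n2, i = 1:n1
        Z[i,j] += y[k] * A[i,j,k]
    end
    return Z
end

Z = matrixLinCombLoop(A, y)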