[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Viral Shah
The matrix-vector multiply in there will lose the benefit of BLAS in 
devectorization. This is one area where we ought to be better, since this 
code is best not devectorized (from a user's perspective).

On my Mac, Python takes 0.27 seconds and Julia 0.4 takes 0.47 seconds. Python is 
perhaps not using a fast BLAS, since it is whatever came with pip.

-viral

On Thursday, January 21, 2016 at 4:22:52 PM UTC+5:30, Kristoffer Carlsson 
wrote:
>
> There is no need to annotate your function argument types so tightly, 
> unless you have a good reason for it.
>
> You will generate a lot of temporaries in your V = ...
>
> Rewrite it as a loop and it will be a lot faster. You could also take a 
> look at the Devectorize.jl package.
>
>

[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Dupont
No gain was noticed.

>

[julia-users] Ambiguous methods warnings for DataFrames and Images

2016-01-21 Thread cormullion
Just wondering if there's a solution in the future for this trivial but mildly 
irritating problem:

julia> using DataFrames, Images
WARNING: New definition
.+(Images.AbstractImageDirect, AbstractArray) at 
/Users/me/.julia/v0.4/Images/src/algorithms.jl:22
is ambiguous with:
.+(AbstractArray, Union{DataArrays.PooledDataArray, 
DataArrays.DataArray}, AbstractArray...) at 
/Users/me/.julia/v0.4/DataArrays/src/broadcast.jl:297.
To fix, define
.+(Images.AbstractImageDirect, Union{DataArrays.PooledDataArray, 
DataArrays.DataArray})
before the new definition.

and so on for 150 lines. It's not a major problem, of course (I just ignore 
it). But I'm curious as to what the fix will be. 

Re: [julia-users] Ambiguous methods warnings for DataFrames and Images

2016-01-21 Thread Mauro
It's a known wart: https://github.com/JuliaLang/julia/issues/6190

But as far as I recall that issue thread, no solution has been found yet.

On Thu, 2016-01-21 at 13:32, cormull...@mac.com wrote:
> Just wondering if there's a solution in the future for this trivial but 
> mildly irritating problem:
>
> julia> using DataFrames, Images
> WARNING: New definition
> .+(Images.AbstractImageDirect, AbstractArray) at 
> /Users/me/.julia/v0.4/Images/src/algorithms.jl:22
> is ambiguous with:
> .+(AbstractArray, Union{DataArrays.PooledDataArray, 
> DataArrays.DataArray}, AbstractArray...) at 
> /Users/me/.julia/v0.4/DataArrays/src/broadcast.jl:297.
> To fix, define
> .+(Images.AbstractImageDirect, Union{DataArrays.PooledDataArray, 
> DataArrays.DataArray})
> before the new definition.
>
> and so on for 150 lines. It's not a major problem, of course (I just ignore 
> it). But I'm curious as to what the fix will be.


Re: [julia-users] Euler - Maruyama slower than python

2016-01-21 Thread Yichao Yu
On Thu, Jan 21, 2016 at 3:33 AM, Dupont  wrote:
> Hi,
>
> I am trying to implement Euler-Maruyama method in Julia for a neural network
> and it is 4x slower than python.
> Can someone give me a hint to fill that gap?
>
> Thank you for your help,
>
> Best regards,
>
> ### Python code
> import numpy as np
> import time
>
> def sig(x):
>     return 1.0 / (1. + np.exp(-x))
>
> def mf_loop(Ndt,V0,V,dt,W,J):
>     sv = np.copy(V0)
>     V = np.copy(V0)
>
>     for i in range(Ndt):
>         sv = 1.0 / (1. + np.exp(-V))
>         V = V + ( -V + np.dot(J,sv) ) * dt + np.sqrt(dt) * W[:,i]
>
> N = 100
> # timestep
> dt = 0.1
> Ndt = 1
>
> sigma  = 0.1
> W = (np.random.randn(N,Ndt)) * sigma
> J = (np.random.rand(N,N))/N
>
> V0 = np.random.rand(N)
> V = np.copy(V0)
>
> ## JULIA CODE ##
> import Base.LinAlg, Base.LinAlg.BlasReal, Base.LinAlg.BlasComplex
>
> function sig(x::Union{Float64,Vector})
>   return 1.0 ./ (1. + exp(-x))
> end
>
> function mf_loop(Ndt::Int64,V0::Vector{Float64},V::Vector{Float64},dt::Float64,W::Matrix{Float64},J::Matrix{Float64})

Remove the type constraints; they don't help performance here, and Int64 is
wrong (use Int, which is Int32 on 32-bit systems).

>   sv = copy(V0)
>   V  = copy(V0)
>
>   for i=1:Ndt
> sv = sig(V)
> V = V + ( -V + J*sv ) * dt + W[:,i]
> # BLAS.gemv!('N',dt,J,sv,1.0-dt,V)
> # BLAS.axpy!(1., W[:,i], V)
>   end

This won't be faster than python since both of them are calling the
underlying c library and are allocating a lot of arrays in each loop.
Even in the version you commented out, sig(V) and W[:, i] are still allocating.

>   nothing
> end
>
> N = 100
> dt = 0.1
> Ndt = 1
>
> sigma  = 0.1
> W = (randn(N,Ndt)) * sigma * sqrt(dt)
> J = (rand(N,N))/N
>
> V0 = rand(N)
> V = copy(V0)
>
> mf_loop(1,V0,V,dt,W,J)
> @time mf_loop(Ndt,V0,V,dt,W,J)
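
For reference, one way to drop the W[:, i] copy on 0.4 is a column view; a
small sketch (Mauro posts a fuller rewrite later in the thread):

wi = sub(W, :, i)   # SubArray view of column i; no allocation
for j = 1:length(V)
    V[j] += wi[j]
end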


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Simon Byrne
I noticed the commented out BLAS.gemv! and BLAS.axpy! lines: did these help?

-Simon

On Thursday, 21 January 2016 11:12:53 UTC, Viral Shah wrote:
>
> The matrix-vector multiply in there will lose the benefit of BLAS in 
> devectorization. This is one area where we ought to be better, since this 
> code is best not devectorized (from a user's perspective).
>
> On my mac, python is .27 seconds and julia 0.4 is .47 seconds. Python is 
> perhaps not using a fast BLAS, since it is whatever came with pip.
>
> -viral
>
> On Thursday, January 21, 2016 at 4:22:52 PM UTC+5:30, Kristoffer Carlsson 
> wrote:
>>
>> There is no need to annotate your function argument types so tightly, 
>> unless you have a good reason for it.
>>
>> You will generate a lot of temporaries in your V = ...
>>
>> Rewrite it as a loop and it will be a lot faster. You could also take a 
>> look at the Devectorize.jl package.
>>
>>

[julia-users] Re: immutability, value types and reference types?

2016-01-21 Thread FANG Colin
My guess, purely: I thought that if an immutable type is big enough, it is 
implemented as a reference type. (Otherwise a lot of memory copying would be 
performed each time it is passed to a function.)

immutable Big
a1::Int
a2::Int
...
a1000::Int
end



On Thursday, January 21, 2016 at 3:55:49 AM UTC, asy...@gmail.com wrote:
>
> Julia's immutable types are value types and mutable types are reference 
> types. In the Julia docs on types there is a paragraph that begins:
>
>  "It is instructive, particularly for readers whose background is C/C++, 
> to consider why these two properties go hand in hand. [...]" 
>
> However this paragraph only explains why value types should be immutable, 
> it doesn't describe why you can't define a reference type that is 
> immutable. So, why not? 
>


Re: [julia-users] immutability, value types and reference types?

2016-01-21 Thread Yichao Yu
On Wed, Jan 20, 2016 at 7:22 PM,   wrote:
> Julia's immutable types are value types and mutable types are reference
> types. In the Julia docs on types there is a paragraph that begins:
>
>  "It is instructive, particularly for readers whose background is C/C++, to
> consider why these two properties go hand in hand. [...]"
>
> However this paragraph only explains why value types should be immutable, it
> doesn't describe why you can't define a reference type that is immutable.
> So, why not?

Because there's no value type at all. An immutable reference type is
effectively a value type.


[julia-users] Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Kristoffer Carlsson
There is no need to annotate your function argument types so tightly, unless 
you have a good reason for it.

You will generate a lot of temporaries in your V = ...

Rewrite it as a loop and it will be a lot faster. You could also take a look at 
the Devectorize.jl package.
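
As a hypothetical sketch of that rewrite (keeping the BLAS matrix-vector
product, fusing only the elementwise updates into loops):

function mf_loop_devec(Ndt, V0, dt, W, J)
    V  = copy(V0)
    sv = similar(V)
    for i = 1:Ndt
        for j = 1:length(V)
            sv[j] = 1.0 / (1.0 + exp(-V[j]))     # sigmoid without a temporary
        end
        Jsv = J*sv                               # keep BLAS for the matvec
        for j = 1:length(V)
            V[j] += (Jsv[j] - V[j])*dt + W[j,i]  # fused update, single pass
        end
    end
    return V
end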

[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Dupont
Hi,

here it is:

   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.2 (2015-12-06 21:47 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/   |  x86_64-apple-darwin13.4.0

julia> versioninfo
versioninfo (generic function with 4 methods)

julia> versioninfo()
Julia Version 0.4.2
Commit bb73f34 (2015-12-06 21:47 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.3



[julia-users] Julia version of a crossword game notebook

2016-01-21 Thread Scott T
I saw this python notebook by Peter Norvig. It is from a lesson in which 
the students develop a program to find the highest-scoring plays in games 
like Scrabble and Words with Friends. The notebook is a refactoring of the 
code from the course.

I've been idly making a Julia version. Though it is a work in progress, I 
thought I'd put it here. In particular, I like how adding `show` and 
`writemime` methods changes how things display in the notebook. I have been 
trying to use useful features like multiple dispatch, shell commands, custom 
types, and docstrings. So far I've got to the stage of displaying the board 
and tiles. Once I finish it, it will be fun to see how fast it runs compared 
to the Python version, since the algorithm will be very similar.

If you are a Julia beginner-intermediate, you might find it fun to look at 
and build on. It uses Julia 0.4. 




[julia-users] Euler - Maruyama slower than python

2016-01-21 Thread Dupont
Hi,

I am trying to implement Euler-Maruyama method in Julia for a neural 
network and it is 4x slower than python.
Can someone give me a hint to fill that gap?

Thank you for your help,

Best regards,

### Python code
import numpy as np
import time 

def sig(x):
    return 1.0 / (1. + np.exp(-x))

def mf_loop(Ndt,V0,V,dt,W,J):
    sv = np.copy(V0)
    V = np.copy(V0)

    for i in range(Ndt):
        sv = 1.0 / (1. + np.exp(-V))
        V = V + ( -V + np.dot(J,sv) ) * dt + np.sqrt(dt) * W[:,i]

N = 100
# timestep
dt = 0.1
Ndt = 1 

sigma  = 0.1
W = (np.random.randn(N,Ndt)) * sigma
J = (np.random.rand(N,N))/N

V0 = np.random.rand(N)
V = np.copy(V0)

## JULIA CODE ##
import Base.LinAlg, Base.LinAlg.BlasReal, Base.LinAlg.BlasComplex

function sig(x::Union{Float64,Vector})
  return 1.0 ./ (1. + exp(-x))
end

function mf_loop(Ndt::Int64,V0::Vector{Float64},V::Vector{Float64},dt::Float64,W::Matrix{Float64},J::Matrix{Float64})
  sv = copy(V0)
  V  = copy(V0)

  for i=1:Ndt
sv = sig(V)
V = V + ( -V + J*sv ) * dt + W[:,i]
# BLAS.gemv!('N',dt,J,sv,1.0-dt,V)
# BLAS.axpy!(1., W[:,i], V)
  end
  nothing
end

N = 100
dt = 0.1
Ndt = 1 

sigma  = 0.1
W = (randn(N,Ndt)) * sigma * sqrt(dt)
J = (rand(N,N))/N

V0 = rand(N)
V = copy(V0)

mf_loop(1,V0,V,dt,W,J)
@time mf_loop(Ndt,V0,V,dt,W,J)


[julia-users] Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Dupont


Hi,

I want to implement Euler-Maruyama for a neural network. My implementation 
in numpy seems 4x faster than my Julia code (100 ms vs 430 ms).
Can anyone give me a hint about how I can speed things up, please?

Thank you for your help,

Best regards.




## Python code
import numpy as np
import time 


def sig(x):
    return 1.0 / (1. + np.exp(-x))

def mf_loop(Ndt,V0,V,dt,W,J):
    sv = np.copy(V0)
    V = np.copy(V0)

    for i in range(Ndt):
        sv = 1.0 / (1. + np.exp(-V))
        V = V + ( -V + np.dot(J,sv) ) * dt + np.sqrt(dt) * W[:,i]
 
N = 100
# timestep Euler Maruyama
dt = 0.1
Ndt = 1 

sigma  = 0.1
W = (np.random.randn(N,Ndt)) * sigma
J = (np.random.rand(N,N))/N

# initial condition
V0 = np.random.rand(N)
# result vector
V = np.copy(V0)

print "calcul\n"
tic = time.time()
mf_loop(Ndt,V0,V,dt,W,J)
toc = time.time()
print " -> dt = ", (toc-tic),"s"


### Julia code
import Base.LinAlg, Base.LinAlg.BlasReal, Base.LinAlg.BlasComplex

function sig(x::Union{Float64,Vector})
  return 1.0 ./ (1. + exp(-x))
end

function mf_loop(Ndt::Int64,V0::Vector{Float64},V::Vector{Float64},dt::Float64,W::Matrix{Float64},J::Matrix{Float64})
  sv = copy(V0)
  V  = copy(V0)

  for i=1:Ndt
sv = sig(V)
V = V + ( -V + J*sv ) * dt + W[:,i]
# BLAS.gemv!('N',dt,J,sv,1.0-dt,V)
# BLAS.axpy!(1., W[:,i], V)
  end
  nothing  
end

N = 100
dt = 0.1
Ndt = 1 


sigma  = 0.1
W = (randn(N,Ndt)) * sigma * sqrt(dt)
J = (rand(N,N))/N

   
V0 = rand(N)
V = copy(V0)


println( "calcul\n")
mf_loop(1,V0,V,dt,W,J)
@time mf_loop(Ndt,V0,V,dt,W,J)


[julia-users] How to get the data from Geom.smooth in Gadfly

2016-01-21 Thread jmarcellopereira
Geom.smooth creates a fitted line through the points. How can I get the 
data for this line?

ex
x = [0.0,0.2,0.4,0.8,1.0,1.2,1.4,1.6,2.0,2.2,2.6,2.8,3.0,3.4]
y = 
[-0.18344,-0.1311,0.02689,0.11053,0.25394,0.25719,0.53189,0.57905,0.93518,0.9166,1.13329,1.26893,1.10203,1.13392]

using Gadfly

Gadfly.plot(x=x, y=y, Geom.point, Geom.smooth)


[julia-users] Re: Adding processes to two different remote hosts.

2016-01-21 Thread Young-Jun Ko
Hello,
I have exactly the same problem.
Have you found a work-around for this?

Thanks

On Thursday, August 13, 2015 at 6:16:37 PM UTC+2, Pere wrote:
>
>
>
> I have several servers that I'm planning to use to run some simulations in 
> Julia. The problem is, I can only add remote processes to a single server. 
> If I try to add the processes to the next server I get an error. This is 
> what I'm trying to do and what I get
>
> addprocs(["user@host1"], tunnel=true, dir="~/julia-483dbf5279/bin/", 
> sshflags=`-p 6969`)  
> addprocs(["user@host2"], tunnel=true, dir="~/julia-483dbf5279/bin/", 
> sshflags=`-p 6969`)
> id: cannot find name for group ID 350
> Error [connect: host is unreachable (EHOSTUNREACH)] on 3 while connecting 
> to peer 2. Exiting.
> Worker 3 terminated.
> ERROR (unhandled task failure): EOFError: read end of file
>  in read at stream.jl:810
>  in message_handler_loop at multi.jl:844
>  in process_tcp_streams at multi.jl:833
>  in anonymous at task.jl:67
>   
>
>
>
>
> The host is reachable and I can connect to it via ssh. I found on Github 
> [1] that the processes on different nodes need to be able to communicate 
> without ssh tunneling. All my serves are behind a firewall, so this is not 
> possible. Is there any alternative for adding multiple processes to 
> different servers? I'm using the latest nightly build of Julia, which is on 
> version 0.4.
>
> https://github.com/JuliaLang/julia/issues/6256
>


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Pablo Zubieta
I am seeing the opposite on my machine (the Julia version being 4 times 
faster than the one in Python).

It might be a problem with the BLAS library that is being used. What is 
your platform and Julia version?


[julia-users] Type of composite type

2016-01-21 Thread pevnak
Hello,
I have a problem to which I have found a dirty solution, and I am keen to 
know if there is a principled one.

I have a composite type defined as

type Outer{T}
 A::T
 B::T
end

where A and B are composite types.

Then I want to create the constructor
function Outer(k::Int)
  return Outer(A{T}(k), B{T}(k))
end

But I have not found a way to put the type information there.
The only dirty hack I have come up with is to define the outer constructor as 

function Outer(k::Int; T::DataType=Float32)
  return Outer(A{T}(k), B{T}(k))
end

But I do not like this solution much; it is a little awkward.

Thanks for suggesting a cleaner solution.

Best wishes,
Tomas



[julia-users] Argument types in ccall vs. llvmcall

2016-01-21 Thread Erik Schnetter
I notice that the argument types in ccall are passed as a tuple of
types, whereas the arguments to llvmcall are passed as a tuple type.
That is, there is e.g.

ccall(:foo, Int, (Int, Int), ...)

and

llvmcall("foo", Int, Tuple{Int, Int}, ...)

Why is this? Is this just for historical reasons? Should this be changed?

Note that the llvmcall code receives multiple arguments (%0, %1, ...),
not just a single tuple argument.
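
For concreteness, a minimal sketch of the two forms (the llvmcall body
follows LLVM's implicit SSA numbering; illustrative only):

# ccall: (Ptr{Void},) is a tuple *of types*
t = ccall(:time, Int, (Ptr{Void},), C_NULL)

# llvmcall: Tuple{Int32} is a tuple *type*; the IR still sees a plain %0
inc(x::Int32) = Base.llvmcall("""
    %2 = add i32 %0, 1
    ret i32 %2""", Int32, Tuple{Int32}, x)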

-erik

-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: How to combine @generated and @inline?

2016-01-21 Thread Erik Schnetter
Thanks for the pointer! `meta(:inline)` is easy enough to generate.

-erik

On Wed, Jan 20, 2016 at 9:55 PM, Jeffrey Sarnoff
 wrote:
> I'd prefer @inline @generated
> because @generated @inline seems to say "generate this function inline" (not
> "inline the function generated")
>
>
> On Wednesday, January 20, 2016 at 2:50:55 PM UTC-5, Matt Bauman wrote:
>>
>> Yeah, I was thinking about that as I responded.  An easier intermediate
>> solution (which could be implemented purely in the Julia macro) would be to
>> support `@inline quote … end`.  Either way, the semantics are a little
>> strange — you're not inlining the quote block, nor are you inlining the
>> function generator itself.
>>
>> On Wednesday, January 20, 2016 at 2:39:23 PM UTC-5, Stefan Karpinski
>> wrote:
>>>
>>> This seems like a viable feature request if you want to open an issue –
>>> i.e. @inline @generated or @generated @inline should arrange that the
>>> resulting function body be annotated appropriately.
>>>
>>> On Wed, Jan 20, 2016 at 2:35 PM, Matt Bauman  wrote:

 You need to manually attach the inline annotation within the function
 body that gets generated.  See, e.g.,
 https://github.com/JuliaLang/julia/blob/275c7e8929dd391960ba88e741c6f537ccca6cc9/base/multidimensional.jl#L233-L236

 On Wednesday, January 20, 2016 at 2:27:14 PM UTC-5, Erik Schnetter
 wrote:
>
> I have a generated function that generates a very small function
> (essentially a vector load instruction). Unfortunately, this function
> is not inlined, and I thus want to mark the generated function as
> @inline. How do I do so?
>
> Writing either "@inline @generated" or "@generated @inline" both fail.
>
> -erik
>
> --
> Erik Schnetter 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>>>
>>>
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Type of composite type

2016-01-21 Thread Mauro
Maybe you can adapt this:

julia> type Outer{T}
           A::T
           B::T
       end
julia> Outer{TT}(a::TT, b::TT) = Outer{TT}(a,b)
Outer{T}

## the first {TT} is a function type parameter whereas the second {TT}
## is the type parameter! Very confusing

julia> Outer(5,6)
Outer{Int64}(5,6)



Note that your last constructor suggests that your type should maybe look
like this:

type Outer{T}
  a::A{T}
  b::B{T}
end

But the gist of the above should still work.  I think there is a section on
this in the manual.
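
For completeness, a sketch with made-up A and B (the original post never
shows their definitions), in a fresh session:

type A{T}; v::Vector{T}; end
type B{T}; v::Vector{T}; end

type Outer{T}
    a::A{T}
    b::B{T}
end

# outer constructor: T is inferred from the arguments
Outer{T}(a::A{T}, b::B{T}) = Outer{T}(a, b)

Outer(A{Float32}([1f0]), B{Float32}([2f0]))  # -> Outer{Float32}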

On Thu, 2016-01-21 at 15:23, pev...@gmail.com wrote:
> Hello,
> I have a problem to which I have found a dirty solution, and I am keen to
> know if there is a principled one.
>
> I have a composite type defined as
>
> type Outer{T}
>  A::T
>  B::T
> end
>
> where A and B are composite types.
>
> Then I want to create the constructor
> function Outer(k::Int)
>   return Outer(A{T}(k), B{T}(k))
> end
>
> But I have not found a way to put the type information there.
> The only dirty hack I have come up with is to define the outer constructor as
>
> function Outer(k::Int; T::DataType=Float32)
>   return Outer(A{T}(k), B{T}(k))
> end
>
> But I do not like this solution much; it is a little awkward.
>
> Thanks for suggesting a cleaner solution.
>
> Best wishes,
> Tomas


[julia-users] Re: ARM binaries for julia 0.4

2016-01-21 Thread Viral Shah
Thanks to Elliot's and Tony's efforts, and the flurry of recent ARM 
activity, we now have ARM nightlies set up on the buildbots; they are 
available from here, and are linked on the downloads page.

https://status.julialang.org/download/linux-arm

This was the issue that was recently closed:
https://github.com/JuliaLang/julia/issues/8359

-viral

On Thursday, October 22, 2015 at 12:54:16 AM UTC+5:30, Tony Kelman wrote:
>
> Viral -
>
> What Linux distribution, GCC version, etc was used to build these?
>
> -Tony
>
>
> On Friday, October 2, 2015 at 5:50:05 PM UTC-7, Viral Shah wrote:
>>
>> Folks, please try out our first ARM binaries. Elliot is working on making 
>> this part of the standard build so that we can get them automatically, but 
>> in the meanwhile, here’s something to get things going. 
>>
>> https://drive.google.com/open?id=0B0rXlkvSbIfhT1lTY0VVRTdvaVE 
>>
>> Please check the arm label on github for issues that you may encounter, 
>> and file a new issue if necessary. 
>>
>> -viral 
>>
>>
>>
>>

Re: [julia-users] immutability, value types and reference types?

2016-01-21 Thread Erik Schnetter
"Immutable" in this sense simply means that the objects are constant.
One could still meaningfully distinguish between equality (==) and
identity (===). My guess is that there's no urgent need for this in
Julia -- as a work-around, you can use a mutable type, and then not
modify objects once created.

-erik

On Thu, Jan 21, 2016 at 8:23 AM, Yichao Yu  wrote:
> On Wed, Jan 20, 2016 at 7:22 PM,   wrote:
>> Julia's immutable types are value types and mutable types are reference
>> types. In the Julia docs on types there is a paragraph that begins:
>>
>>  "It is instructive, particularly for readers whose background is C/C++, to
>> consider why these two properties go hand in hand. [...]"
>>
>> However this paragraph only explains why value types should be immutable, it
>> doesn't describe why you can't define a reference type that is immutable.
>> So, why not?
>
> Because there's no value type at all. An immutable reference type is
> effectively a value type.



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Optimizing Function Injection in Julia

2016-01-21 Thread Bryan Rivera
The goal is to remove all overhead associated with using Nullable, since we 
know at compile time what path (in the if branch) will result in None vs 
Some.  Since we know this, we can simply inject `function2` into 
`function1`.  However, there is the issue that `function2` requires the 
variable `z`.

I found Monad.jl but it is out of date and does not optimize the right way. 
 In fact, it uses so many deeply nested anonymous functions that the 
performance suffers vs other approaches.

I am looking for something that would optimize this:


  function function1(a, b)
if(a > b)
  c = a + b
  d = a - b
  return (c, d)
else
  # do anything
  # but return nothing
end
  end


  function function2(c, d, z)
return c + d + z
  end


  a = 1
  b = 2
  z = 10


  @mdo Maybe begin
(c, d) <- Maybe( function1(a, b) )
res <- function2(c, d, z)
  end


Into this:

  a = 1
  b = 2
  z = 10


  function1(a, b, z)


  function function1(a, b, z)
    if(a > b)
      c = a + b
      d = a - b
      return function2(c, d, z)
else
  # do anything
  # but return nothing
end
  end


Notice how `function2` was inserted, and also how `function1` has the `z` 
variable added to its parameters.  But again, what is the limit on adding 
variables to the parameters?  And it looks efficient to me, but is it 
really?

This saves us the overhead of using `Nullable{T}`.  And it is an 
optimization we can do at parse time.

**Option 1**: So we could create a function with z already filled in, but 
this would result in the generation of many different functions, for 
different values of z.

**Option 2**: Or we can use Nullable, and safely parse the result, but this 
also introduces unnecessary overhead.

**Option 3**:  Can use macros, but firstly what is the limit on function 
parameters?  Secondly, is there a better way?

I really like the Monad comprehension syntax; it's probably the best way to 
describe what we want to do.

Any equivalent of this optimization would be nice, maybe something using 
anon funs, but done correctly.


[julia-users] Re: How to get the data from Geom.smooth in Gadfly

2016-01-21 Thread Diego Javier Zea
Hi!

You should check the Loess package: https://github.com/dcjones/Loess.jl
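
For example, a minimal sketch following the Loess.jl README:

using Loess
model = loess(x, y)                                  # x, y as in your example
xs = collect(linspace(minimum(x), maximum(x), 100))
ys = predict(model, xs)                              # the smoothed line's data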

Best,

On Thursday, 21 January 2016 at 6:39:32 (UTC-3), jmarcell...@ufpi.edu.br 
wrote:
>
> Geom.smooth creates a fitted line through the points. How can I get the 
> data for this line?
>
> ex
> x = [0.0,0.2,0.4,0.8,1.0,1.2,1.4,1.6,2.0,2.2,2.6,2.8,3.0,3.4]
> y = 
> [-0.18344,-0.1311,0.02689,0.11053,0.25394,0.25719,0.53189,0.57905,0.93518,0.9166,1.13329,1.26893,1.10203,1.13392]
>
> using Gadfly
>
> Gadfly.plot(x=x, y=y, Geom.point, Geom.smooth)
>


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Pablo Zubieta
I just stumbled upon this comment on this GitHub issue. It seems that the 
current situation in Julia won't let you match the numpy code yet (until 
that issue is solved). But from the comment in the link, the following 
should be faster (it is, at least, on my machine).

### Julia code
import Base.LinAlg, Base.LinAlg.BlasReal, Base.LinAlg.BlasComplex

Base.disable_threaded_libs()

function sig(x::Union{Float64,Vector})
  return 1.0 ./ (1. + exp(-x))
end

function mf_loop(Ndt::Int64,V0::Vector{Float64},V::Vector{Float64},dt::Float64,W::Matrix{Float64},J::Matrix{Float64})
sv = copy(V0)
V  = copy(V0)
for i=1:Ndt
#sv = sig(V)
#V = V + ( -V + J*sv ) * dt + W[:,i]
BLAS.gemv!('N',dt,J,sv,1.0-dt,V)
BLAS.axpy!(1., W[:,i], V)
end
nothing 
end

N = 100
dt = 0.1
Ndt = 1

sigma  = 0.1
W = (randn(N,Ndt)) * sigma * sqrt(dt)
J = (rand(N,N))/N
  
V0 = rand(N)
V = copy(V0)

println( "calcul\n")
mf_loop(1,V0,V,dt,W,J)
@time mf_loop(Ndt,V0,V,dt,W,J)

Cheers.


Re: [julia-users] Type of composite type

2016-01-21 Thread Yichao Yu
On Thu, Jan 21, 2016 at 9:23 AM,   wrote:
> Hello,
> I have a problem to which I have found a dirty solution and I am keen to
> know, if there is a principal one.
>
> I have a composite type defined as
>
> type Outer{T}
>  A::T
>  B::T
> end
>
> where A and and B are composite types
>
> Then I want to create constructor
> function Outer(k::Int)
>   return(Outer(A{T}(k),B{T}(k))
> end

call{T}(::Type{Outer{T}}, k::Int) = Outer(A{T}(k), B{T}(k))

Outer{Float64}(1)

Syntax might change again with jb/function

>
> But I have not find a way to put there the type information.
> The only dirty hack I have come with to define the outer constructor as
>
> function Outer(k::Int;T::DataType=Float32)
>   return(Outer(A{T}(k),B{T}(k))
> end
>
> But I do not like this solution too much. It is little awkward.
>
> Thanks for suggesting a cleaner solution.
>
> Best wishes,
> Tomas
>


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Dupont
Hi,

Thank you for your answer. It is a bit faster for me too, but still 3x 
slower than numpy (note that you have to uncomment #sv = sig(V) in your 
code).

Best regards,


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Pablo Zubieta
I guess that you can also throw a couple of @inbounds before the for loops 
in Mauro's solution to improve things a bit more.


Re: [julia-users] how do I get the number of arguments of a function?

2016-01-21 Thread amiksvi
I'm using Julia v0.4.2 and I can't compute the length of what m.sig returns:

julia> function f(x)
   end

julia> for m in methods(f)
 println(length(m.sig))
   end
ERROR: MethodError: `length` has no method matching length(::Type{Tuple{
Int64,Int64}})

It seems the problem is that I'm trying to use the function length on a 
Type, and not on an instance of said Type. How can I make this work?

Many thanks,


Re: [julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Mauro
On my system Julia is also ~4x faster than Python, so it seems that this
is mainly a problem with the BLAS library.  Did you build Julia
yourself or are you using binaries?

I see some improvements (20%) using an in-place sig function and some
other bits.  Together with Pablo's trick this halves execution time:

import Base.LinAlg, Base.LinAlg.BlasReal, Base.LinAlg.BlasComplex
Base.disable_threaded_libs()

function sig!(out, x::Vector)
for i=1:length(x)
        out[i] = 1.0 / (1.0 + exp(-x[i]))
end
nothing
end

function mf_loop(Ndt::Int64,V0::Vector{Float64},V::Vector{Float64},dt::Float64,W::Matrix{Float64},J::Matrix{Float64})
  sv = copy(V0)
  V  = copy(V0)

  for i=1:Ndt

  # sv = sig(V)
  # V = (1-dt)*V +  J*sv * dt + W[:,i]
sig!(sv,V)
BLAS.gemv!('N',dt,J,sv,1.0-dt,V)
  for j=1:length(V)
  V[j] += W[j,i]
  end
  end
  nothing
end

...


On Thu, 2016-01-21 at 17:10, Dupont  wrote:
> Hi,
>
> Thank you for your answer. It is a bit faster for me too but still 3x
> slower than numpy (note that you have to uncomment #sv = sig(V) in your
> code.
>
> Best regards,


Re: [julia-users] Argument types in ccall vs. llvmcall

2016-01-21 Thread Stefan Karpinski
The type of a tuple used to be a tuple of types but that was changed in 0.4
– the way ccall is invoked is a vestige of that, but a convenient one.

On Thu, Jan 21, 2016 at 10:11 AM, Erik Schnetter 
wrote:

> I notice that the argument types in ccall are passed as a tuple of
> types, whereas the arguments to llvmcall are passed as a tuple type.
> That is, there is e.g.
>
> ccall(:foo, Int, (Int, Int), ...)
>
> and
>
> llvmcall("foo", Int, Tuple{Int, Int}, ...)
>
> Why is this? Is this just for historic reasons? Should this be changed?
>
> Note that the llvmcall code receives multiple arguments (%0, %1, ...),
> not just a single tuple argument.
>
> -erik
>
> --
> Erik Schnetter 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


Re: [julia-users] how do I get the number of arguments of a function?

2016-01-21 Thread Yichao Yu
On Thu, Jan 21, 2016 at 11:35 AM,   wrote:
> I'm using Julia v0.4.2 and I can't compute the length of what m.sig returns:
>
> julia> function f(x)
>end
>
> julia> for m in methods(f)
>  println(length(m.sig))
>end
> ERROR: MethodError: `length` has no method matching
> length(::Type{Tuple{Int64,Int64}})
>
> Since I'm trying to use the function length on a Type, and not on an
> instance of said Type. How can I do that?

length(m.sig.parameters)
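
e.g. on 0.4:

julia> f(x) = x;

julia> for m in methods(f)
           println(length(m.sig.parameters))
       end
1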

>
> Many thanks,


Re: [julia-users] immutability, value types and reference types?

2016-01-21 Thread Stefan Karpinski
Semantically all objects are reference types in Julia. It just happens that
if they're immutable the system is free to copy them or not since there's
no way to tell the difference.
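
A quick illustration (not from the original mail):

immutable Pt; x::Int; end
Pt(1) === Pt(1)    # true: immutables are egal when their contents are

type MPt; x::Int; end
MPt(1) === MPt(1)  # false: mutables have observable identity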

On Thu, Jan 21, 2016 at 9:21 AM, Erik Schnetter  wrote:

> "Immutable" in this sense simply means that the objects are constant.
> One could still meaningfully distinguish between equality (==) and
> identity (===). My guess is that there's no urgent need for this in
> Julia -- as a work-around, you can use a mutable type, and then not
> modify objects once created.
>
> -erik
>
> On Thu, Jan 21, 2016 at 8:23 AM, Yichao Yu  wrote:
> > On Wed, Jan 20, 2016 at 7:22 PM,   wrote:
> >> Julia's immutable types are value types and mutable types are reference
> >> types. In the Julia docs on types there is a paragraph that begins:
> >>
> >>  "It is instructive, particularly for readers whose background is
> C/C++, to
> >> consider why these two properties go hand in hand. [...]"
> >>
> >> However this paragraph only explains why value types should be
> immutable, it
> >> doesn't describe why you can't define a reference type that is
> immutable.
> >> So, why not?
> >
> > Because there's no value type at all. An immutable reference type is
> > effectively a value type.
>
>
>
> --
> Erik Schnetter 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


Re: [julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Dupont
@Mauro,

I am using the dmg located at 
https://s3.amazonaws.com/julialang/bin/osx/x64/0.4/julia-0.4.3-osx10.7+.dmg
My timings using your code are 0.330 s vs 0.1 s for numpy.

I would be happy if someone told me that Julia has not caught up yet, but 
the fact that you see a speedup is puzzling.
I have a MacBook Pro (Retina, 15-inch, Mid 2015), on OS X 10.11

Thank you for your help,


Re: [julia-users] how do I get the number of arguments of a function?

2016-01-21 Thread amiksvi
Great, thanks a lot.
In fact, I also need to evaluate the number of arguments of an anonymous 
function:

julia> function factory(y)
 return x -> x + y
   end
factory (generic function with 1 method)

julia> type Foo
 f::Function
   end

julia> foo = Foo(factory(2))
Foo((anonymous function))

julia> methods(foo.f)
ERROR: ArgumentError: argument is not a generic function
 in methods at reflection.jl:180


Any way to do that..?



Re: [julia-users] numpy vs julia benchmarking for random matrix-vector multiplication

2016-01-21 Thread Dupont
Silly me...

I was using Anaconda Python and it seems to be using more than one process...

I apologize for this,

Thanks everybody for your help,

Best


Re: [julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Stefan Karpinski
Compiling from source should be a matter of cloning and running `make`. It
will take a while and use CPU though.
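
Roughly:

git clone https://github.com/JuliaLang/julia.git
cd julia
make    # add -j4 to parallelize; the first build takes a while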

On Thu, Jan 21, 2016 at 2:47 PM, Mauro  wrote:

> My timings are (on linux, 2012 i7 CPU)
> - 0.14s original
> - 0.05 my latest version
> - 0.24 python (2 & 3)
>
> You could try to compile it yourself and see if that is faster.  But I'm
> not sure how much hassle that is on OSX.
>
> Maybe easier/quicker would be to install the 0.3 dmg.  See whether that
> gives better timing.
>
> Maybe someone how makes the .dmg can comment?
>
> On Thu, 2016-01-21 at 19:06, Dupont  wrote:
> > @Mauro,
> >
> > I am using the dmg located at
> >
> https://s3.amazonaws.com/julialang/bin/osx/x64/0.4/julia-0.4.3-osx10.7+.dmg
> > My timings using your code is .330s vs .1s for numpy.
> >
> > I would be happy if one told me that Julia has not caught yet, but the
> fact
> > that you have a speedup is troubling.
> > I have a MacBook Pro (Retina, 15-inch, Mid 2015), on osx 10.11
> >
> > Thank you for your help,
>


Re: [julia-users] numpy vs julia benchmarking for random matrix-vector multiplication

2016-01-21 Thread Mauro
> Silly me...
>
> I was using Anaconda Python and it seems to be using more than one process...

No, OpenBLAS used by Julia should also use several threads.

You can change the number of threads like so:

~/julia/tmp >> export OPENBLAS_NUM_THREADS=2
~/julia/tmp >> julia5 tt.jl

I actually get the best results for one thread, at least for the matrix
size you use.
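
The thread count can also be set from within Julia at runtime (0.4):

blas_set_num_threads(1)   # exported in 0.4; limits OpenBLAS to one thread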


Re: [julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Dupont
Hi,

I was using anaconda with more than one thread... That may explain the 
differences in timing.

I apologize for this and thank everybody for your help,

Best regards.


[julia-users] Performance and Size problem with JLD saving DecisionTree model

2016-01-21 Thread Ian Watson
Using DecisionTree to build a random forest model.  Small, 200 items, 664 
predictors for each item, input file size under 1 MB

I can build a random forest model with 1000 trees in about 8 seconds - 
great.

@time model=build_forest(yvalues[:,1],features,2,1000,0.5)

Then I tried to save that model for subsequent scoring by writing it to a 
JLD file.

Writing to an NFS mounted disk took multiple minutes, while writing a 194MB 
(!!) file.

If I write that to /dev/shm, it still takes 51 seconds (and still 194MB)

@time save("/dev/shm/foo.jld","model",model)
 51.406531 seconds (12.01 M allocations: 465.667 MB, 0.38% gc time)

When I do something comparable in R with the same dataset, build the model 
and then use save() to save the model and the features, the whole process 
takes about 14 seconds, and is 2.8MB on disk. The save() part of the 
processing is very fast.

whos() shows

 model   6884 KB DecisionTree.Ensemble

so if this is a good estimate of memory, I don't think the problem is with 
the DecisionTree object.

Am I doing something wrong, or is JLD doing something horrible?

I saw https://github.com/JuliaLang/julia/issues/7893, so perhaps those 
problems still persist?




Re: [julia-users] interp1-like functionality from Grid.jl

2016-01-21 Thread Tim Holy
Now you should probably be using Interpolations.jl.

--Tim

On Wednesday, January 20, 2016 05:35:53 PM argel.rami...@ciencias.unam.mx 
wrote:
> Can't get it to work, maybe I'm not understanding something. Can you help me?
> On Julia v0.4.2:
>
> AltInterp = CoordInterpGrid(Znw, squeeze(Altura_segun_corte[xx,yy,:],(1,2)),
>                             BCnil, InterpLinear);
>
> throws:
>
> ERROR: LoadError: MethodError: `convert` has no method matching
> convert(::Type{Grid.CoordInterpGrid{T<:AbstractFloat,N,BC<:Grid.BoundaryCondition,IT<:Grid.InterpType,R}},
> ::Array{Float64,1}, ::Array{Float64,1}, ::Type{Grid.BCnil}, ::Type{Grid.InterpLinear})
>
> This may have arisen from a call to the constructor
> Grid.CoordInterpGrid{T<:AbstractFloat,N,BC<:Grid.BoundaryCondition,IT<:Grid.InterpType,R}(...),
> since type constructors fall back to convert methods.
>
> Closest candidates are:
>   Grid.CoordInterpGrid{N,T<:AbstractFloat}(::NTuple{N,Range{T}},
>     ::Array{T<:AbstractFloat,N}, ::Any...)
>   Grid.CoordInterpGrid{R<:Range{T},T<:AbstractFloat}(::R<:Range{T},
>     ::Array{T<:AbstractFloat,1}, ::Any...)
>   call{T}(::Type{T}, ::Any)
>   ...
>
>  in call at essentials.jl:57
>  [inlined code] from Cortes_horizontales_U_V_mod2.jl:46
>  in anonymous at no file:0
>  in include at ./boot.jl:261
>  in include_from_node1 at ./loading.jl:304
> while loading Cortes_horizontales_U_V_mod2.jl, in expression starting on
> line 42
>
> I can't make anything of this error message! My variables here are:
>
> julia> typeof(Znw)
> Array{Float64,1}
>
> julia> typeof(squeeze(Altura_segun_corte[1,1,:],(1,2)))
> Array{Float64,1}
>
> julia> sizeof(Znw)
> 224
>
> julia> sizeof(squeeze(Altura_segun_corte[1,1,:],(1,2)))
> 224
>
> What would the problem be?
> 
> On Saturday, 19 April 2014 at 11:06:30 (UTC-5), Simon Byrne wrote:
> > I actually wanted this functionality myself. See
> > https://github.com/timholy/Grid.jl/pull/14
> >
> > On Thursday, 17 April 2014 13:26:57 UTC+1, Tim Holy wrote:
> >> That's fine. That's how it always works; things happen in Julia when
> >> someone finds the time to do them.
> >>
> >> --Tim
> >>
> >> On Wednesday, April 16, 2014 10:07:46 PM Spencer Lyon wrote:
> >> > I'd love to beef up this wrapper type and add it to Grid, but
> >> > unfortunately I won't be able to get to it for a while -- probably
> >> > late June.
> >> >
> >> > On Tuesday, April 15, 2014 9:06:57 AM UTC-4, Tim Holy wrote:
> >> > > On Tuesday, April 15, 2014 05:35:27 AM Spencer Lyon wrote:
> >> > > > It seems to me that this would be fairly standard functionality.
> >> > > > I am sure there is a benefit to having the default getindex
> >> > > > methods deal in "index units" instead of physical ones, but I
> >> > > > can't tell what that benefit is? Is there a reason you chose to
> >> > > > have it set up the way it is?
> >> > >
> >> > > When physical units = indexing units, you save one multiply and one
> >> > > add on each interpolation operation. So it's best to implement the
> >> > > base operation "minimally," and add wrapper types that require more
> >> > > operations around it. I've not personally ever needed anything else
> >> > > (I mostly do interpolation on images), and no one else has added it
> >> > > to Grid, either.
> >> > >
> >> > > If you wanted to add your wrapper type to Grid, I think that would
> >> > > be great. Some additional things to think about:
> >> > > - Derivatives (here, the chain rule is your friend)
> >> > > - Dimensions higher than 1
> >> > > - It's no longer just a shift, it's also scaled, so a name change
> >> > >   might be in order.
> >> > >
> >> > > --Tim


Re: [julia-users] Ambiguous methods warnings for DataFrames and Images

2016-01-21 Thread Tim Holy
There is a solution, but no one (including me) has yet gotten around to it:

https://github.com/JuliaStats/DataArrays.jl/issues/168

The more general solution is to turn ambiguity warnings into automatically- 
(and silently-)generated "stub" functions that throw an error when called. 
However, I think that's waiting on an overhaul of the type system.

Best,
--Tim

On Thursday, January 21, 2016 01:44:49 PM Mauro wrote:
> It's a known wart: https://github.com/JuliaLang/julia/issues/6190
> 
> But as far as I recall that issue thread, no solution has been found yet.
> 
> On Thu, 2016-01-21 at 13:32, cormull...@mac.com wrote:
> > Just wondering if there's a solution in the future for this trivial but
> > mildly irritating problem:
> >
> > julia> using DataFrames, Images
> > WARNING: New definition
> > .+(Images.AbstractImageDirect, AbstractArray) at
> > /Users/me/.julia/v0.4/Images/src/algorithms.jl:22
> > is ambiguous with:
> > .+(AbstractArray, Union{DataArrays.PooledDataArray,
> > DataArrays.DataArray}, AbstractArray...) at
> > /Users/me/.julia/v0.4/DataArrays/src/broadcast.jl:297.
> > To fix, define
> > .+(Images.AbstractImageDirect, Union{DataArrays.PooledDataArray,
> > DataArrays.DataArray})
> > before the new definition.
> >
> > and so on for 150 lines. It's not a major problem, of course (I just
> > ignore it). But I'm curious as to what the fix will be.



[julia-users] How to use a variable for filename in @Logging.configure?

2016-01-21 Thread jock . lawrie
Hi all,

I am using Logging.jl as follows:

@Logging.configure(filename = "path/to/repo/mylogfile.log", level = INFO)

This works fine. However I would like to set the log file name as a global 
variable (const my_log_file = "path/to/repo/mylogfile.log") and then call a 
script that includes the following line:

@Logging.configure(filename = my_log_file, level = INFO)

When I do this I get the following error:

ERROR: LoadError: UndefVarError: my_log_file not defined

Any ideas?

Thanks,
Jock


Re: [julia-users] Performance and Size problem with JLD saving DecisionTree model

2016-01-21 Thread Tim Holy
Depends on the internal storage of the random forest model. You might need to 
create a custom serializer:
https://github.com/JuliaLang/JLD.jl/blob/master/doc/jld.md#custom-serialization
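
An untested sketch of those hooks for this case (the blob type here is made
up; the real Ensemble layout may suggest something smarter):

using JLD, DecisionTree

immutable EnsembleBlob
    bytes::Vector{UInt8}
end

function JLD.writeas(m::DecisionTree.Ensemble)
    io = IOBuffer()
    serialize(io, m)                 # let the Base serializer walk the trees
    EnsembleBlob(takebuf_array(io))  # takebuf_array is the 0.4 spelling
end

JLD.readas(b::EnsembleBlob) = deserialize(IOBuffer(b.bytes))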

--Tim

On Thursday, January 21, 2016 12:57:08 PM Ian Watson wrote:
> Using DecisionTree to build a random forest model.  Small, 200 items, 664
> predictors for each item, input file size under 1 MB
> 
> I can build a random forest model with 1000 trees in about 8 seconds -
> great.
> 
> @time model=build_forest(yvalues[:,1],features,2,1000,0.5)
> 
> Then I tried to save that model for subsequent scoring by writing it to a
> JLD file.
> 
> Writing to an NFS mounted disk took multiple minutes, while writing a 194MB
> (!!) file.
> 
> If I write that to /dev/shm, it still takes 51 seconds (and still 194MB)
> 
> @time save("/dev/shm/foo.jld","model",model)
>  51.406531 seconds (12.01 M allocations: 465.667 MB, 0.38% gc time)
> 
> When I do something comparable in R with the same dataset, build the model
> and then use save() to save the model and the features, the whole process
> takes about 14 seconds, and is 2.8MB on disk. The save() part of the
> processing is very fast.
> 
> whos() shows
> 
>  model   6884 KB DecisionTree.Ensemble
> 
> so if this is a good estimate of memory, I don't think the problem is with
> the DecisionTree object.
> 
> Am I doing something wrong, or is JLD doing something horrible?
> 
> Saw this. https://github.com/JuliaLang/julia/issues/7893, so perhaps
> problems still persist?



[julia-users] Parallel coputing for vector?

2016-01-21 Thread Yao Lu
I want to compute v += g(v) in parallel, where v is a high-dimensional 
vector. But I'd like to stick to the vectorized formulation, so I don't 
want to switch to a for-loop and use @parallel. How could I do this?


[julia-users] Julia Dict key in keys() but not in sort(collect(keys()))

2016-01-21 Thread Michael Lindon
I have a problem I don't quite understand. Here is an example:

julia> badnum∈keys(g)
false
julia> badnum∈sort(collect(keys(g)))
true
julia> badnum
0.9122066068007542

Does collecting the keys do some type conversion that introduces rounding? 
I have defined g as Dict{Float64,Float64}().


[julia-users] Re: Optimizing Function Injection in Julia

2016-01-21 Thread Bryan Rivera
I think what I wrote above might be too complicated, as it is an attempt to 
solve this problem.

In essence this is what I want: 



function function1(a, b, onGreaterThanCallback)
  if(a > b)
c = a + b
    res = onGreaterThanCallback(c)
return res + 1
  else
# do anything
# but return nothing
  end
end


global onGreaterThanCallback = (c) -> c + z

function1(a, b, onGreaterThanCallback)


Problems:

The global variable.

The anonymous function, which has a performance impact (vs. other 
approaches).  We could use Tim Holy's @anon, but then the value of `z` is 
fixed at function definition, which we don't always want.

I think that the ideal optimization would look like this:

  function function1(a, b, z) # Variable needed in callback fun 
injected.
if(a > b)
  c = a + b
  res = c + z # Callback function has been injected.
  return res + 1
else
  # do anything
  # but return nothing
end
  end


  function1(a, b, z)

In OO languages we would be using an abstract class or its equivalent.  But 
I've thought about it, and read the discussions on interfaces, and don't 
see those solutions optimizing the code out like I did above.

Any ideas?


Re: [julia-users] immutability, value types and reference types?

2016-01-21 Thread asynth


On Thursday, January 21, 2016 at 8:48:19 AM UTC-8, Stefan Karpinski wrote:
>
> Semantically all objects are reference types in Julia. It just happens 
> that if they're immutable the system is free to copy them or not since 
> there's no way to tell the difference.
>

So how do you make Haskell-like immutable reference objects, i.e. trees of 
immutable objects?
When can you tell if an immutable object is inlined in its container or is 
a reference, or are you saying they are always by reference and never 
inlined in the containing object?



[julia-users] Issue with return values from subtypes

2016-01-21 Thread Scott Jones
I ran across something strange today, with some of the test code, that used 
something like:
for x in [subtypes(Real) ; subtypes(Complex)]
...
end

The issue is that subtypes(Real) returns:

julia> subtypes(Real)
4-element Array{Any,1}:
 AbstractFloat
 Integer
 Irrational{sym}
 Rational{T<:Integer}

If instead of having subtypes(Real), I try to put the types in directly, it 
won't accept either Irrational{sym} or Rational{T<:Integer}.
Is there a correct way to represent those?  Should Irrational{sym} be 
something like Irrational{::Symbol}?
Is this a bug in subtypes?

Thanks, Scott



[julia-users] Re: Optimizing Function Injection in Julia

2016-01-21 Thread Bryan Rivera
Guys, it's killing me having to wait hours until my posts are approved.

(Ability to edit would be nice as well.. Although it looks like I can edit 
all of my posts save for the original.)

What must be done to overcome this limit?


[julia-users] ANN: Julia "lite" branch available

2016-01-21 Thread Scott Jones
This is still a WIP, and can definitely use some more work in 1) testing on 
other platforms, 2) better disentangling of documentation, 3) advice on how 
better to accomplish its goals, and 4) testing with different subsets of 
functionality turned on (so far I've tested just with BUILD_FULL disabled 
(the "lite" version) or enabled (same as master)).

This branch (spj/lite in ScottPJones 
repository, https://github.com/ScottPJones/julia/tree/spj/lite) by default 
will build a "lite" version of Julia, and by putting
override BUILD_xxx = 1
lines in Make.user, different functionality can be built back in (such as 
BigInt, BigFloat, LinAlg, Float16, Mmap, Threads, ...).  See Make.inc for 
the full list.
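
For example, a hypothetical Make.user (the exact option names live in 
Make.inc on the branch):

override BUILD_LINALG = 1
override BUILD_MMAP = 1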

I've also made it so that all unit tests pass (that don't use disabled 
functionality).
(the hard part there was that testing can be spread all over the place, 
esp. for BigInt, BigFloat, Complex, and Rational types).

It will also not build libraries such as arpack, lapack, openblas, fftw, 
suitesparse, mpfr, gmp, depending on what BUILD_* options have been set.

This is only a first step; the real goal is to have a minimal useful core 
that can have the other parts easily added, in such a way that they still 
appear to have been defined completely in Base.
One place where I think this can be very useful is for building minimal 
versions of Julia to run on things like the Raspberry Pi.

-Scott




[julia-users] Re: Optimizing Function Injection in Julia

2016-01-21 Thread Cedric St-Jean
Something like this?

function function1(a, b, f) # Variable needed in callback fun injected.
if(a > b)
  c = a + b
  res = f(c) # Callback function has been injected.
  return res + 1
else
  # do anything
  # but return nothing
end
end

type SomeCallBack
z::Int
end
Base.call(callback::SomeCallBack, c) = c + callback.z

function1(2, 1, SomeCallBack(10))

Because of JIT, this is 100% equivalent to your "callback function has been 
injected" example, performance-wise. My feeling is that .call overloading 
is not to be abused in Julia, so I would favor using a regular function 
call with a descriptive name instead of call overloading, but the same 
performance guarantees apply. Does that answer your question?

On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Bryan Rivera wrote:
>
> I think what I wrote above might be too complicated, as it is an attempt 
> to solve this problem.
>
> In essence this is what I want: 
>
>
>
> function function1(a, b, onGreaterThanCallback)
>   if(a > b)
> c = a + b
> res = onGreaterThanCallback(c)
> return res + 1
>   else
> # do anything
> # but return nothing
>   end
> end
>
>
> global onGreaterThanCallback = (c) -> c + z
>
> function1(a, b, onGreaterThanCallback)
>
>
> Problems:
>
> The global variable.
>
> The anonymous function which has performance impact (vs other approaches). 
>  We could use Tim Holy's @anon, but then the value of `z` is fixed at 
> function definition, which we don't always want.
>
> I think that the ideal optimization would look like this:
>
>   function function1(a, b, z) # Variable needed in callback fun 
> injected.
> if(a > b)
>   c = a + b
>   res = c + z # Callback function has been injected.
>   return res + 1
> else
>   # do anything
>   # but return nothing
> end
>   end
>
>
>   function1(a, b, z)
>
> In OO languages we would be using an abstract class or its equivalent. 
>  But I've thought about it, and read the discussions on interfaces, and 
> don't see those solutions optimizing the code out like I did above.
>
> Any ideas?
>


Re: [julia-users] Checking for boxing

2016-01-21 Thread Jameson
Yichao is correct. The `box` intrinsic isn't particularly well named and 
has a bit of an identity crisis -- if it throws an error, it calls itself 
`reinterpret`, which is closer to its real purpose, which is to change the 
declared type of some bits.

On Thursday, January 14, 2016 at 8:26:39 PM UTC-5, Yichao Yu wrote:
>
> On Thu, Jan 14, 2016 at 7:48 PM, Spencer Russell  
> wrote: 
> > Is there a way to check for whether values are getting boxed/unboxed 
> short 
> > of looking at the llvm code? As an example, here’s a simple function 
> that I 
> > wouldn’t expect to need any boxes: 
> > 
> > julia> function isnullptr{T}(x::Ptr{T}) 
> >  x == Ptr{T}(0) 
> >end 
> > isnullptr (generic function with 1 method) 
> > 
> > But when I look at it with `code_warntype` it shows a comparison between 
> the 
> > boxed versions of the pointers: 
> > 
> > julia> code_warntype(isnullptr, (Ptr{Void}, )) 
> > Variables: 
> >   x::Ptr{Void} 
> > 
> > Body: 
> >   begin  # none, line 2: 
> >   return (Base.box)(UInt64,x::Ptr{Void}) === 
> > (Base.box)(UInt64,(Base.box)(Ptr{Void},0))::Bool 
> >   end::Bool 
> > 
> > But by the time the code gets generated I can rest easy that all is 
> well: 
> > 
> > julia> code_llvm(isnullptr, (Ptr{Void}, )) 
> > 
> > define i1 @julia_isnullptr_21821(i8*) { 
> > top: 
> >   %1 = icmp eq i8* %0, null 
> >   ret i1 %1 
> > } 
> > 
> > So in this case the end result was what I expected, but I’m currently 
> trying 
> > to debug some problems with a larger chunk of code and am not very 
> > llmv-literate, so I’m wondering if there’s another way to check. 
> > 
> > Interestingly, a more general zero-checker: iszero{T}(x::T) = x == T(0) 
> > doesn’t show boxing in the code_warntype output if T is an Int, but does 
> if 
> > it's Ptr{Void} type, so maybe this is somewhat pointer-specific? 
> > Unfortunately the code I’m looking at does a lot of pointer wrangling. 
>
> Ignore the `Base.box` in the typed ast. I don't think they will cause 
> heap allocation unless there's type instability. 
>
> > 
> > Thanks, 
> > Spencer 
>


Re: [julia-users] Euler - Maruyama slower than python

2016-01-21 Thread Steven G. Johnson
Yichao is right: contrary to a common misconception, declaring argument types 
in Julia has nothing to do with performance.  Functions are specialized for 
the argument types that you pass at compile-time anyway.   You declare 
argument types for three reasons: (a) controlling dispatch (having 
different versions of a function for different types), (b) ensuring 
correctness (if your function would work but give an unexpected answer for 
some types), and (c) clarity (to indicate "hey, I really want you to pass 
some kind of integer here" etc.).

You aren't going to get a lot of performance gains for this particular 
function, because it vectorizes well already, so you are taking good 
advantage of fast code (written in C) in Python.   The big performance 
advantage of Julia is for problems that don't vectorize well, so that you 
are forced to write your own inner loops.

However, I do get some speedup for your function in Julia by unrolling and 
merging some of the loops.  It also cuts down a lot on the memory usage by 
removing temporary allocations.   The speedup is particularly noticeable if 
you are interested in single-core performance: call blas_set_num_threads(1) 
to use the single-threaded BLAS.

function mf_loop2(Ndt,V0,dt,W,J)
V = copy(V0)
n = length(V)
Jsv = copy(V0)
@inbounds for i=1:Ndt
# compute Jsv = J*sig(V), in-place
fill!(Jsv, 0)
for k = 1:n
sv = sig(V[k])
@simd for j = 1:n
Jsv[j] += J[j,k] * sv
end
end

# compute V += ( -V + J*sv ) * dt + W[:,i], in-place
@simd for j = 1:n
V[j] += (Jsv[j] - V[j]) * dt + W[j,i]
end
end
return V
end
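
Usage sketch (assuming the sig and data definitions from the original post
are in scope):

blas_set_num_threads(1)               # single-threaded BLAS, as suggested above
mf_loop2(1, V0, dt, W, J)             # warm-up call to compile
@time V = mf_loop2(Ndt, V0, dt, W, J)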



Re: [julia-users] Issue with return values from subtypes

2016-01-21 Thread Mauro
You mean why the second one errors:

julia> Irrational
Irrational{sym}

julia> Irrational{sym}
ERROR: UndefVarError: sym not defined

?  Irrational is all that is needed. Would this help:

julia> Irrational{TypeVar(:T)}
Irrational{T}

On Thu, 2016-01-21 at 19:21, Scott Jones  wrote:
> I ran across something strange today, with some of the test code, that used
> something like:
> for x in [subtypes(Real) ; subtypes(Complex)]
> ...
> end
>
> The issue is that subtypes(Real) returns:
>
> julia> subtypes(Real)
> 4-element Array{Any,1}:
>  AbstractFloat
>  Integer
>  Irrational{sym}
>  Rational{T<:Integer}
>
> If instead of having subtypes(Real), I try to put the types in directly, it
> won't accept either Irrational{sym} or Rational{T<:Integer}.
> Is there a correct way to represent those?  Should Irrational{sym} be
> something like Irrational{::Symbol}?
> Is this a bug in subtypes?
>
> Thanks, Scott


Re: [julia-users] Re: Optimizing Function Injection in Julia

2016-01-21 Thread Mauro
> Guys, it's killing me having to wait hours until my posts are approved.

Once you get fully approved by the moderators, your post will go through
immediately (most of the time).

> (Ability to edit would be nice as well.. Although it looks like I can edit
> all of my posts save for the original.)

Many of us read the posts as emails and we don't get updates on edits.
Thus better to repost.