[julia-users] ASTs of complete modules/packages?

2016-09-23 Thread Andreas Lobinger
Hello colleagues,

I'm playing around with some ideas for testing, and I'm searching for 
something like this:

* read-in (from the original file or via name) a complete module (or 
package)
* transform/parse to AST
* insert additional Expr/code in dedicated places (to count calls etc.)
* eval, so the module exists with the original name, but with more 
functionality

Any pointer to read?
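
In case it helps to make the question concrete, here is roughly what I mean 
(a hypothetical sketch for 0.5; parse and eval are Base, the counter and 
helper names are made up):

# Parse every top-level expression in a file. Base.parse reads one
# expression at a time, so loop until the end of the string.
function parse_all(path)
    src = readstring(path)
    exprs = Any[]
    pos = 1
    while pos <= endof(src)
        ex, pos = parse(src, pos)
        ex === nothing || push!(exprs, ex)
    end
    exprs
end

# Walk an Expr tree and splice a call counter into every long-form
# `function ... end` body it finds.
call_count = 0
function instrument!(ex)
    if isa(ex, Expr)
        if ex.head == :function
            unshift!(ex.args[2].args, :(global call_count += 1))
        end
        foreach(instrument!, ex.args)
    end
    ex
end

# eval'ing the instrumented expressions back in redefines the module
# under its original name:
# for ex in parse_all("MyModule.jl"); eval(instrument!(ex)); end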

Wishing a happy day,
  Andreas


[julia-users] Is interrupt() supposed to kill worker processes?

2016-09-23 Thread Kevin Keys
The documentation says that interrupt(), when called with no arguments, does 
the equivalent on worker processes of Ctrl + C on the local process. But in my 
experience, Ctrl + C does not necessarily kill the Julia instance. In 
contrast, it does seem like interrupt() kills worker processes.

Tested on both Julia v0.4.3 and Julia v0.5:

addprocs(3)
interrupt() # segfaults, death, cries of woe...

Perhaps the relevant error message is this one (forgive the garbling from 
workers printing output in parallel...):

fatal: error thrown and no exception handler available.

fatal: error thrown and no exception handler available.

fatal: error thrown and no exception handler available.

julia> ()InterruptExceptionInterruptException

(())
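
If the goal were just to take the workers down, the explicit route seems 
safer (a sketch; rmprocs and workers are Base functions):

addprocs(3)
rmprocs(workers())  # ask each worker to exit cleanly and wait for it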


KLK


[julia-users] Cookbook for Julia

2016-09-23 Thread Pranav Bhat
It can be frustrating to comb through docs, forums or Stack Overflow to 
find the code snippet you need, especially if you're trying out a language 
(or even programming) for the first time. I found the Perl Cookbook 
extremely helpful when I used Perl for the first time. Instead of having to read 
documentation or go through a tutorial, I just copied and modified code 
from the Cookbook's snippets. This allowed me to get productive faster, and 
I ended up learning the language as well.

I'm creating a cookbook for Julia: 
https://github.com/pranavtbhat/JuliaCookbook. I hope to cover most of the 
problems covered in Perl Cookbook, and provide short examples instead of 
detailed text. 

Some of the topics I have in mind are:
1. Arrays
2. Tuples
3. Strings
4. DataFrames
5. ML
6. Plotting

If you feel that something could be done in a better way, or if you have a 
problem that you'd like to add to the cookbook, please submit an Issue/PR. 

Thanks!
Pranav.


[julia-users] Re: can't get pyjulia to work

2016-09-23 Thread Tim Wheeler
So a temporary workaround is:

export PYTHONPATH=$HOME/Desktop/pyjulia:$PYTHONPATH

This just adds the pyjulia repo to my PYTHONPATH.

On Friday, September 23, 2016 at 3:01:10 PM UTC-7, Tim Wheeler wrote:
>
> Another pyjulia call for help.
>
> I am using Julia 0.5 and Anaconda 2.3.0 with Python 2.7. I followed the 
> pycall install instructions. Julia is on my 
> PATH.
> When I run python in the cloned pyjulia repo everything is fine.
> When I run python in a different directory I get the following error:
>
> >>> import julia
> >>> j = julia.Julia(debug=True)
> JULIA_HOME = /bin,  libjulia_path = 
> /bin/../lib/x86_64-linux-gnu/libjulia.so.0.5
> calling jl_init(/bin)
> seems to work...
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/home/tim/anaconda/lib/python2.7/site-packages/julia/core.py", 
> line 288, in __init__
> self.api.jl_bytestring_ptr.restype = char_p
>   File "/home/tim/anaconda/lib/python2.7/ctypes/__init__.py", line 375, in 
> __getattr__
> func = self.__getitem__(name)
>   File "/home/tim/anaconda/lib/python2.7/ctypes/__init__.py", line 380, in 
> __getitem__
> func = self._FuncPtr((name_or_ordinal, self))
> AttributeError: /bin/../lib/x86_64-linux-gnu/libjulia.so.0.5: undefined 
> symbol: jl_bytestring_ptr
>
> Is this a path issue? Do I need to add the cloned pyjulia repo to my path 
> or something?
>
> Thank you.
>
>

Re: [julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
Sheehan, empty arrays are a problem in general; for now I'd say it's better 
to handle the empty case separately in your functions. Note that the only 
problematic case should be broadcasting non-type-stable functions over 
empty arrays. Unless that is extremely important for you, you might 
consider overriding promote_op, but again I'd say it is better to handle the 
empty case on its own.
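
If you do end up specializing promote_op for several operators anyway, a 
short metaprogramming loop avoids writing each method by hand (a sketch 
against an internal API, so no guarantees it survives version upgrades):

import Base: promote_op
for op in (:+, :-, :*, :/)
    @eval promote_op(::typeof($op), ::Type{Foo}, ::Type{Foo}) = Float64
end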

On Saturday, September 24, 2016 at 12:18:10 AM UTC+2, Sheehan Olver wrote:
>
> An empty array triggered a bug caused by not dispatching correctly, which 
> I worked around with isempty.  But I could have also overridden promote_op 
> and not had to deal with empty arrays as a special case.
>
>
> On 24 Sep. 2016, at 8:15 am, Pablo Zubieta wrote:
>
> Sheehan, are you planning on doing a lot of operations with empty arrays? 
> On 0.5 with your example
>
> julia> [Foo(1), Foo(2)] + [Foo(1), Foo(0)]
> 2-element Array{Foo,1}:
>  Foo{Float64}(1.0)
>  Foo{Int64}(1)
>
> The problem is empty arrays: when the type cannot be inferred, broadcast 
> uses the types of each element to build the array. When there are no 
> elements it doesn't know what type to choose.
>
> On Friday, September 23, 2016 at 11:47:11 PM UTC+2, Sheehan Olver wrote:
>>
>> OK, here's a better example of the issue: in the following code I would 
>> want it to return an Array(Foo,0), not an Array(Any,0).  Is this 
>> possible without overriding promote_op?
>>
>> julia> immutable Foo{T}
>>            x::T
>>        end
>>
>> julia> import Base.+
>>
>> julia> +(a::Foo,b::Foo) = (a.x == b.x ? Foo(1.0) : Foo(1))
>> + (generic function with 164 methods)
>>
>> julia> Array(Foo{Float64},0)+Array(Foo{Float64},0)
>> 0-element Array{Any,1}
>>
>> On Friday, September 23, 2016 at 10:54:03 PM UTC+10, Pablo Zubieta wrote:
>>>
>>> In julia 0.5 the following should work without needing to do anything to 
>>> promote_op:
>>>
>>> import Base.+
>>> immutable Foo end
>>> +(a::Foo, b::Foo) = 1.0
>>> Array{Foo}(0) + Array{Foo}(0)
>>>
>>> promote_op is supposed to be an internal method that you wouldn't need 
>>> to override. If it is not working, it is because the operation you are 
>>> doing is most likely not type stable. So instead of specializing it you 
>>> could try to remove any type instabilities in the method definitions over 
>>> your types.
>>>
>>> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:


 The subject says it all: it looks like one can override promote_op to 
 support the following behaviour:

 julia> import Base.+

 julia> immutable Foo end
 WARNING: Method definition (::Type{Main.Foo})() in module Main at 
 REPL[5]:1 overwritten at REPL[10]:1.

 julia> +(a::Foo,b::Foo) = 1.0
 + (generic function with 164 methods)

 julia> Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64

 julia> Array(Foo,0) + Array(Foo,0)
 0-element Array{Float64,1}

 Is this documented somewhere?  What if we want to override /, -, etc., 
 is the solution to write a promote_op for each case?

>>>
>

[julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
Andrew, I do not understand the details, but I believe there are some 
restrictions when using generated functions: you are not supposed to use 
functions with side effects, closures, or comprehensions inside them, and 
functions like promote_op, which rely on inference, can confuse inference 
when used in generated functions.

On Friday, September 23, 2016 at 10:16:17 PM UTC+2, Andrew Keller wrote:
>
> Does the promote_op mechanism in v0.5 play nicely with generated 
> functions? In Unitful.jl, I use a generated function to determine result 
> units after computations involving quantities with units. I seem to get 
> errors (@inferred tests fail) if I remove my promote_op specialization. 
> Perhaps my problems are all a consequence of 
> https://github.com/JuliaLang/julia/issues/18465 and they will go away 
> soon...?
>
> On Friday, September 23, 2016 at 5:54:03 AM UTC-7, Pablo Zubieta wrote:
>>
>> In julia 0.5 the following should work without needing to do anything to 
>> promote_op:
>>
>> import Base.+
>> immutable Foo end
>> +(a::Foo, b::Foo) = 1.0
>> Array{Foo}(0) + Array{Foo}(0)
>>
>> promote_op is supposed to be an internal method that you wouldn't need to 
>> override. If it is not working, it is because the operation you are doing is 
>> most likely not type stable. So instead of specializing it you could try to 
>> remove any type instabilities in the method definitions over your types.
>>
>> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>>
>>>
>>> The subject says it all: it looks like one can override promote_op to 
>>> support the following behaviour:
>>>
>>> julia> import Base.+
>>>
>>> julia> immutable Foo end
>>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>>> REPL[5]:1 overwritten at REPL[10]:1.
>>>
>>> julia> +(a::Foo,b::Foo) = 1.0
>>> + (generic function with 164 methods)
>>>
>>> julia> Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64
>>>
>>> julia> Array(Foo,0) + Array(Foo,0)
>>> 0-element Array{Float64,1}
>>>
>>> Is this documented somewhere?  What if we want to override /, -, etc., 
>>> is the solution to write a promote_op for each case?
>>>
>>

Re: [julia-users] Re: How does promote_op work?

2016-09-23 Thread Sheehan Olver
An empty array triggered a bug caused by not dispatching correctly, which I 
worked around with isempty.  But I could have also overridden promote_op and not 
had to deal with empty arrays as a special case.


> On 24 Sep. 2016, at 8:15 am, Pablo Zubieta wrote:
> 
> Sheehan, are you planning on doing a lot of operations with empty arrays? On 
> 0.5 with your example
> 
> julia> [Foo(1), Foo(2)] + [Foo(1), Foo(0)]
> 2-element Array{Foo,1}:
>  Foo{Float64}(1.0)
>  Foo{Int64}(1)
> 
> The problem is empty arrays: when the type cannot be inferred, broadcast uses 
> the types of each element to build the array. When there are no elements it 
> doesn't know what type to choose.
> 
> On Friday, September 23, 2016 at 11:47:11 PM UTC+2, Sheehan Olver wrote:
> OK, here's a better example of the issue: in the following code I would want 
> it to return an Array(Foo,0), not an Array(Any,0).  Is this possible without 
> overriding promote_op?
> 
> julia> immutable Foo{T}
>            x::T
>        end
> 
> julia> import Base.+
> 
> julia> +(a::Foo,b::Foo) = (a.x == b.x ? Foo(1.0) : Foo(1))
> + (generic function with 164 methods)
> 
> julia> Array(Foo{Float64},0)+Array(Foo{Float64},0)
> 0-element Array{Any,1}
> 
> On Friday, September 23, 2016 at 10:54:03 PM UTC+10, Pablo Zubieta wrote:
> In julia 0.5 the following should work without needing to do anything to 
> promote_op:
> 
> import Base.+
> immutable Foo end
> +(a::Foo, b::Foo) = 1.0
> Array{Foo}(0) + Array{Foo}(0)
> 
> promote_op is supposed to be an internal method that you wouldn't need to 
> override. If it is not working, it is because the operation you are doing is 
> most likely not type stable. So instead of specializing it you could try to 
> remove any type instabilities in the method definitions over your types.
> 
> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
> 
> The subject says it all: it looks like one can override promote_op to support 
> the following behaviour:
> 
> julia> import Base.+
> 
> julia> immutable Foo end
> WARNING: Method definition (::Type{Main.Foo})() in module Main at REPL[5]:1 
> overwritten at REPL[10]:1.
> 
> julia> +(a::Foo,b::Foo) = 1.0
> + (generic function with 164 methods)
> 
> julia> Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64
> 
> julia> Array(Foo,0) + Array(Foo,0)
> 0-element Array{Float64,1}
> 
> Is this documented somewhere?  What if we want to override /, -, etc., is the 
> solution to write a promote_op for each case?



[julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
Sheehan, are you planning on doing a lot of operations with empty arrays? 
On 0.5 with your example

julia> [Foo(1), Foo(2)] + [Foo(1), Foo(0)]
2-element Array{Foo,1}:
 Foo{Float64}(1.0)
 Foo{Int64}(1)

The problem is empty arrays: when the type cannot be inferred, broadcast 
uses the types of each element to build the array. When there are no 
elements it doesn't know what type to choose.

On Friday, September 23, 2016 at 11:47:11 PM UTC+2, Sheehan Olver wrote:
>
> OK, here's a better example of the issue: in the following code I would 
> want it to return an Array(Foo,0), not an Array(Any,0).  Is this 
> possible without overriding promote_op?
>
> julia> immutable Foo{T}
>            x::T
>        end
>
> julia> import Base.+
>
> julia> +(a::Foo,b::Foo) = (a.x == b.x ? Foo(1.0) : Foo(1))
> + (generic function with 164 methods)
>
> julia> Array(Foo{Float64},0)+Array(Foo{Float64},0)
> 0-element Array{Any,1}
>
>
> On Friday, September 23, 2016 at 10:54:03 PM UTC+10, Pablo Zubieta wrote:
>>
>> In julia 0.5 the following should work without needing to do anything to 
>> promote_op:
>>
>> import Base.+
>> immutable Foo end
>> +(a::Foo, b::Foo) = 1.0
>> Array{Foo}(0) + Array{Foo}(0)
>>
>> promote_op is supposed to be an internal method that you wouldn't need to 
>> override. If it is not working, it is because the operation you are doing is 
>> most likely not type stable. So instead of specializing it you could try to 
>> remove any type instabilities in the method definitions over your types.
>>
>> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>>
>>>
>>> The subject says it all: it looks like one can override promote_op to 
>>> support the following behaviour:
>>>
>>> julia> import Base.+
>>>
>>> julia> immutable Foo end
>>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>>> REPL[5]:1 overwritten at REPL[10]:1.
>>>
>>> julia> +(a::Foo,b::Foo) = 1.0
>>> + (generic function with 164 methods)
>>>
>>> julia> Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64
>>>
>>> julia> Array(Foo,0) + Array(Foo,0)
>>> 0-element Array{Float64,1}
>>>
>>> Is this documented somewhere?  What if we want to override /, -, etc., 
>>> is the solution to write a promote_op for each case?
>>>
>>

[julia-users] Re: can't get pyjulia to work

2016-09-23 Thread Tim Wheeler
Another pyjulia call for help.

I am using Julia 0.5 and Anaconda 2.3.0 with Python 2.7. I followed the pycall 
install instructions. Julia is on my 
PATH.
When I run python in the cloned pyjulia repo everything is fine.
When I run python in a different directory I get the following error:

>>> import julia
>>> j = julia.Julia(debug=True)
JULIA_HOME = /bin,  libjulia_path = 
/bin/../lib/x86_64-linux-gnu/libjulia.so.0.5
calling jl_init(/bin)
seems to work...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/tim/anaconda/lib/python2.7/site-packages/julia/core.py", line 
288, in __init__
self.api.jl_bytestring_ptr.restype = char_p
  File "/home/tim/anaconda/lib/python2.7/ctypes/__init__.py", line 375, in 
__getattr__
func = self.__getitem__(name)
  File "/home/tim/anaconda/lib/python2.7/ctypes/__init__.py", line 380, in 
__getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: /bin/../lib/x86_64-linux-gnu/libjulia.so.0.5: undefined 
symbol: jl_bytestring_ptr

Is this a path issue? Do I need to add the cloned pyjulia repo to my path 
or something?

Thank you.



[julia-users] Re: How does promote_op work?

2016-09-23 Thread Sheehan Olver
OK, here's a better example of the issue: in the following code I would 
want it to return an Array(Foo,0), not an Array(Any,0).  Is this 
possible without overriding promote_op?

julia> immutable Foo{T}
           x::T
       end

julia> import Base.+

julia> +(a::Foo,b::Foo) = (a.x == b.x ? Foo(1.0) : Foo(1))
+ (generic function with 164 methods)

julia> Array(Foo{Float64},0)+Array(Foo{Float64},0)
0-element Array{Any,1}





On Friday, September 23, 2016 at 10:54:03 PM UTC+10, Pablo Zubieta wrote:
>
> In julia 0.5 the following should work without needing to do anything to 
> promote_op:
>
> import Base.+
> immutable Foo end
> +(a::Foo, b::Foo) = 1.0
> Array{Foo}(0) + Array{Foo}(0)
>
> promote_op is supposed to be an internal method that you wouldn't need to 
> override. If it is not working, it is because the operation you are doing is 
> most likely not type stable. So instead of specializing it you could try to 
> remove any type instabilities in the method definitions over your types.
>
> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>
>>
>> The subject says it all: it looks like one can override promote_op to 
>> support the following behaviour:
>>
>> julia> import Base.+
>>
>> julia> immutable Foo end
>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>> REPL[5]:1 overwritten at REPL[10]:1.
>>
>> julia> +(a::Foo,b::Foo) = 1.0
>> + (generic function with 164 methods)
>>
>> julia> Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64
>>
>> julia> Array(Foo,0) + Array(Foo,0)
>> 0-element Array{Float64,1}
>>
>> Is this documented somewhere?  What if we want to override /, -, etc., is 
>> the solution to write a promote_op for each case?
>>
>

[julia-users] Re: How does promote_op work?

2016-09-23 Thread Andrew Keller
Does the promote_op mechanism in v0.5 play nicely with generated functions? 
In Unitful.jl, I use a generated function to determine result units after 
computations involving quantities with units. I seem to get errors 
(@inferred tests fail) if I remove my promote_op specialization. Perhaps my 
problems are all a consequence 
of https://github.com/JuliaLang/julia/issues/18465 and they will go away 
soon...?

On Friday, September 23, 2016 at 5:54:03 AM UTC-7, Pablo Zubieta wrote:
>
> In julia 0.5 the following should work without needing to do anything to 
> promote_op:
>
> import Base.+
> immutable Foo end
> +(a::Foo, b::Foo) = 1.0
> Array{Foo}(0) + Array{Foo}(0)
>
> promote_op is supposed to be an internal method that you wouldn't need to 
> override. If it is not working, it is because the operation you are doing is 
> most likely not type stable. So instead of specializing it you could try to 
> remove any type instabilities in the method definitions over your types.
>
> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>
>>
>> The subject says it all: it looks like one can override promote_op to 
>> support the following behaviour:
>>
>> julia> import Base.+
>>
>> julia> immutable Foo end
>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>> REPL[5]:1 overwritten at REPL[10]:1.
>>
>> julia> +(a::Foo,b::Foo) = 1.0
>> + (generic function with 164 methods)
>>
>> julia> Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64
>>
>> julia> Array(Foo,0) + Array(Foo,0)
>> 0-element Array{Float64,1}
>>
>> Is this documented somewhere?  What if we want to override /, -, etc., is 
>> the solution to write a promote_op for each case?
>>
>

Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-23 Thread Erik Schnetter
It should. Yes, please open an issue.

-erik
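
For the issue, a minimal reproduction could look like this (a sketch; 
whether vfmadd appears depends on the CPU target the sysimg was built for):

f(x) = 2.0x + 3.0            # plain: separate multiply and add expected
k(x) = @fastmath 2.0x + 3.0  # fast-math: fma contraction is allowed

@code_native f(4.0)
@code_native k(4.0)          # look for vfmadd on an FMA-capable target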

On Thu, Sep 22, 2016 at 7:46 PM, Chris Rackauckas 
wrote:

> So, in the end, is `@fastmath` supposed to be adding FMA? Should I open an
> issue?
>
> On Wednesday, September 21, 2016 at 7:11:14 PM UTC-7, Yichao Yu wrote:
>>
>> On Wed, Sep 21, 2016 at 9:49 PM, Erik Schnetter 
>> wrote:
>> > I confirm that I can't get Julia to synthesize a `vfmadd` instruction
>> > either... Sorry for sending you on a wild goose chase.
>>
>> -march=haswell does the trick for C (both clang and gcc)
>> the necessary bit for the machine ir optimization (this is not a llvm
>> ir optimization pass) to do this is llc options -mcpu=haswell and
>> function attribute unsafe-fp-math=true.
>>
>> >
>> > -erik
>> >
>> > On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu  wrote:
>> >>
>> >> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter 
>> >> wrote:
>> >> > On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas 
>> >> > wrote:
>> >> >>
>> >> >> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg
>> and
>> >> >> now
>> >> >> I get results where g and h apply muladd/fma in the native code,
>> but a
>> >> >> new
>> >> >> function k which is `@fastmath` inside of f does not apply
>> muladd/fma.
>> >> >>
>> >> >>
>> >> >> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910
>> >> >>
>> >> >> Should I open an issue?
>> >> >
>> >> >
>> >> > In your case, LLVM apparently thinks that `x + x + 3` is faster to
>> >> > calculate
>> >> > than `2x+3`. If you use a less round number than `2` multiplying
>> `x`,
>> >> > you
>> >> > might see a different behaviour.
>> >>
>> >> I've personally never seen llvm create fma from mul and add. We might
>> >> not have the llvm passes enabled if LLVM is capable of doing this at
>> >> all.
>> >>
>> >> >
>> >> > -erik
>> >> >
>> >> >
>> >> >> Note that this is on v0.6 Windows. On Linux the sysimg isn't
>> rebuilding
>> >> >> for some reason, so I may need to just build from source.
>> >> >>
>> >> >> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik
>> Schnetter
>> >> >> wrote:
>> >> >>>
>> >> >>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas <
>> rack...@gmail.com>
>> >> >>> wrote:
>> >> 
>> >>  Hi,
>> >>    First of all, does LLVM essentially fma or muladd expressions
>> like
>> >>  `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one
>> >>  explicitly use
>> >>  `muladd` and `fma` on these types of instructions (is there a
>> macro
>> >>  for
>> >>  making this easier)?
>> >> >>>
>> >> >>>
>> >> >>> Yes, LLVM will use fma machine instructions -- but only if they
>> lead
>> >> >>> to
>> >> >>> the same round-off error as using separate multiply and add
>> >> >>> instructions. If
>> >> >>> you do not care about the details of conforming to the IEEE
>> standard,
>> >> >>> then
>> >> >>> you can use the `@fastmath` macro that enables several
>> optimizations,
>> >> >>> including this one. This is described in the manual under
>> >> >>> performance-tips/#performance-annotations.
>> >> >>>
>> >> >>>
>> >>    Secondly, I am wondering if my setup is not applying these
>> >>  operations
>> >>  correctly. Here's my test code:
>> >> 
>> >>  f(x) = 2.0x + 3.0
>> >>  g(x) = muladd(x,2.0, 3.0)
>> >>  h(x) = fma(x,2.0, 3.0)
>> >> 
>> >>  @code_llvm f(4.0)
>> >>  @code_llvm g(4.0)
>> >>  @code_llvm h(4.0)
>> >> 
>> >>  @code_native f(4.0)
>> >>  @code_native g(4.0)
>> >>  @code_native h(4.0)
>> >> 
>> >>  Computer 1
>> >> 
>> >>  Julia Version 0.5.0-rc4+0
>> >>  Commit 9c76c3e* (2016-09-09 01:43 UTC)
>> >>  Platform Info:
>> >>    System: Linux (x86_64-redhat-linux)
>> >>    CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz
>> >>    WORD_SIZE: 64
>> >>    BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell)
>> >>    LAPACK: libopenblasp.so.0
>> >>    LIBM: libopenlibm
>> >>    LLVM: libLLVM-3.7.1 (ORCJIT, broadwell)
>> >> >>>
>> >> >>>
>> >> >>> This looks good, the "broadwell" architecture that LLVM uses
>> should
>> >> >>> imply
>> >> >>> the respective optimizations. Try with `@fastmath`.
>> >> >>>
>> >> >>> -erik
>> >> >>>
>> >> >>>
>> >> >>>
>> >> >>>
>> >> 
>> >>  (the COPR nightly on CentOS7) with
>> >> 
>> >>  [crackauc@crackauc2 ~]$ lscpu
>> >>  Architecture:  x86_64
>> >>  CPU op-mode(s):32-bit, 64-bit
>> >>  Byte Order:Little Endian
>> >>  CPU(s):16
>> >>  On-line CPU(s) list:   0-15
>> >>  Thread(s) per core:1
>> >>  Core(s) per socket:8
>> >>  Socket(s): 2
>> >>  NUMA node(s):  2
>> >>  Vendor ID: GenuineIntel
>> >>  CPU family:6
>> >>  Model: 79
>> >>  Model 

[julia-users] Re: Problems with Memoize in Julia 0.5.0

2016-09-23 Thread Steven G. Johnson


On Friday, September 23, 2016 at 9:08:29 AM UTC-4, Ed Scheinerman wrote:
>
> Hello,
>
> I use memoization frequently and have run into two problems with the move 
> to Julia 0.5.0. 
>
> The first is not too serious and I hope can be fixed readily.  The first 
> time I memoize a function, a warning is generated like this:
>

This kind of warning is fairly innocuous and happens when we upgrade Julia 
versions, due to language changes; it takes a while for packages to catch 
up with a new release.

In this case, it looks like it was already fixed 
(https://github.com/simonster/Memoize.jl/pull/7), but maybe a new version 
needs to be tagged.

 

> More significantly, if I want multiple dispatch on a function name, the 
> second instance creates a problem and the definition is rejected. Here I 
> define a factorial function that always returns a BigInt. The first 
> function definition succeeds but the second one fails:
>

I would file an issue with Memoize.jl; I'm not sure why the package is 
limited to memoizing only a single method definition, but this seems like 
it would need to be fixed upstream.

(Note, by the way, that you could define @memoize Factorial(n::Integer) = 
factorial(big(n)), and it will be far more efficient because it will call 
an optimized GMP factorial function.)
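
Until that is fixed upstream, a hand-rolled memo table sidesteps the 
single-method limitation (a sketch, not Memoize.jl API; the cache keys on 
the full argument tuple so both methods can share it):

const factorial_cache = Dict{Tuple,BigInt}()

function Factorial(n::Integer)
    get!(factorial_cache, (n,)) do
        n < 0 && throw(DomainError())
        n <= 1 ? big(1) : n * Factorial(n - 1)
    end
end

function Factorial(n::Integer, k::Integer)
    get!(factorial_cache, (n, k)) do
        (n < 0 || k < 0 || k > n) && throw(DomainError())
        k == n ? big(1) : n * Factorial(n - 1, k)
    end
end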


Re: [julia-users] Re: ANN: A potential new Discourse-based Julia forum

2016-09-23 Thread daycaster
... @juliaheartbeat on twitter also started silent running on Sept 17...

Re: [julia-users] trivial but stubborn question on conversion to unix time and Float64

2016-09-23 Thread Yichao Yu
On Fri, Sep 23, 2016 at 11:50 AM, Yichao Yu  wrote:
> On Fri, Sep 23, 2016 at 11:33 AM, Reuben Brooks  wrote:
>> Trying to convert a DateTime object to unix time and then to a string, for
>> an API that uses unix time. However, I cannot figure out how to get the
>> result formatted in non-scientific notation:
>>
>> julia> a = Dates.datetime2unix(now()); string(round(a))
>> "1.474626735e9"
>>
>>
>> What I want is to get:
>>
>> julia> str = string(round(a))
>> "1474626535"
>>
>> I cannot figure out how to specify the format for the Float64.
>
> round(Int, a)

or `round(Int64, a)` if you want to avoid time overflow on 32bit in a few years.
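
Putting the whole conversion together (a sketch):

a = Dates.datetime2unix(now())  # Float64, e.g. 1.474626735e9
str = string(round(Int64, a))   # "1474626735"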

>
>>
>>
>>


Re: [julia-users] trivial but stubborn question on conversion to unix time and Float64

2016-09-23 Thread Yichao Yu
On Fri, Sep 23, 2016 at 11:33 AM, Reuben Brooks  wrote:
> Trying to convert a DateTime object to unix time and then to a string, for
> an API that uses unix time. However, I cannot figure out how to get the
> result formatted in non-scientific notation:
>
> julia> a = Dates.datetime2unix(now()); string(round(a))
> "1.474626735e9"
>
>
> What I want is to get:
>
> julia> str = string(round(a))
> "1474626535"
>
> I cannot figure out how to specify the format for the Float64.

round(Int, a)

>
>
>


[julia-users] trivial but stubborn question on conversion to unix time and Float64

2016-09-23 Thread Reuben Brooks
Trying to convert a DateTime object to unix time and then to a string, for an 
API that uses unix time. However, I cannot figure out how to get the result 
formatted in non-scientific notation:

julia> a = Dates.datetime2unix(now()); string(round(a))
"1.474626735e9"


What I want is to get:

julia> str = string(round(a))
"1474626535"

I cannot figure out how to specify the format for the Float64.





[julia-users] Re: Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Cristóvão Duarte Sousa
On the other hand, it works ok for the mean function:

r = rand(10)
test(r) = mean( t^2 for t in r )
@code_warntype test(r)   # return type Float64 is inferred

On Thursday, September 22, 2016 at 7:21:36 PM UTC+1, Christoph Ortner wrote:
>
> I hope that there is something I am missing, or making a mistake in the 
> following example: 
>
> r = rand(10)
> test1(r) = sum( t^2 for t in r )
> test2(r)= sum( [t^2 for t in r] )
> @code_warntype test1(r)   # return type Any is inferred
> @code_warntype test2(r)   # return type Float64 is inferred
>
>
> This caused a problem for me, beyond execution speed: I used a generator 
> to create the elements for a comprehension, and since the type was not 
> inferred, the zero element could not be created.
>
> Is this a known issue?
>


[julia-users] Problems with Memoize in Julia 0.5.0

2016-09-23 Thread Ed Scheinerman
Hello,

I use memoization frequently and have run into two problems with the move 
to Julia 0.5.0. 

The first is not too serious and I hope can be fixed readily.  The first 
time I memoize a function, a warning is generated like this:

julia> using Memoize

julia> @memoize f(a) = a+1
WARNING: symbol is deprecated, use Symbol instead.
 in depwarn(::String, ::Symbol) at ./deprecated.jl:64
 in symbol(::String, ::Vararg{String,N}) at ./deprecated.jl:30
 in @memoize(::Expr, ::Vararg{Expr,N}) at 
/Users/ers/.julia/v0.5/Memoize/src/Memoize.jl:14
 in eval(::Module, ::Any) at ./boot.jl:234
 in eval(::Module, ::Any) at 
/Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib:?
 in eval_user_input(::Any, ::Base.REPL.REPLBackend) at ./REPL.jl:64
 in macro expansion at ./REPL.jl:95 [inlined]
 in (::Base.REPL.##3#4{Base.REPL.REPLBackend})() at ./event.jl:68
while loading no file, in expression starting on line 0
(::#28#f) (generic function with 1 method)


More significantly, if I want multiple dispatch on a function name, the 
second instance creates a problem and the definition is rejected. Here I 
define a factorial function that always returns a BigInt. The first 
function definition succeeds but the second one fails:

julia> @memoize function Factorial(n::Integer)
 if n<0
   throw(DomainError())
 end
 if n==0 || n==1
   return big(1)
 end
 return n * Factorial(n-1)
   end
(::#35#Factorial) (generic function with 1 method)

julia> @memoize function Factorial(n::Integer,k::Integer)
 if n<0 || k<0 || k > n
   throw(DomainError())
 end

 if k==n
   return big(1)
 end

 return n * Factorial(n-1,k)
   end
ERROR: cannot define function Factorial; it already has a value
 in macro expansion; at /Users/ers/.julia/v0.5/Memoize/src/Memoize.jl:103 
[inlined]
 in anonymous at ./:?

julia> Factorial(10)  # this works
3628800

julia> Factorial(10,2) # but this doesn't
ERROR: MethodError: no method matching (::##35#Factorial#2)(::Int64, 
::Int64)
Closest candidates are:
  #35#Factorial(::Integer) at 
/Users/ers/.julia/v0.5/Memoize/src/Memoize.jl:113




Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Patrick Kofod Mogensen
I still don't quite get why a) inference between the generator and the 
comprehension is different, and b) why inference went down the drain when I 
added the type annotation for the return value in my example above... Sorry 
if the answer is in this discussion somewhere!

On Friday, September 23, 2016 at 1:16:33 PM UTC+2, Steven G. Johnson wrote:
>
>
>
> On Friday, September 23, 2016 at 4:13:53 AM UTC-4, Christoph Ortner wrote:
>>
>> The sum of an empty set or vector is undefined; it is not zero.
>>
>>> you can rewrite it in a more explicit way
>>>


>> Actually, a sum over an empty set is normally defined to be zero, while a 
>> product over an empty set is normally defined to be one.
>>
>
> More precisely, the empty sum and product are the additive and 
> multiplicative identities, respectively.
>
> However, if you are summing something whose element type is unknown (Any), 
> then the additive identity is also unknown (zero(Any) is undefined), and so 
> sum(Any[]) is undefined (and throws an error).
>


[julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
In julia 0.5 the following should work without needing to do anything to 
promote_op:

import Base.+
immutable Foo end
+(a::Foo, b::Foo) = 1.0
Array{Foo}(0) + Array{Foo}(0)

promote_op is supposed to be an internal method that you wouldn't need to 
override. If it is not working, it is because the operation you are doing is 
most likely not type stable. So instead of specializing it you could try to 
remove any type instabilities in the method definitions over your types.
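
For instance, with the Foo{T} example from elsewhere in this thread, making 
both branches of + return the same concrete type removes the instability 
(a sketch):

import Base.+
immutable Foo{T}
    x::T
end
# Both branches now return Foo{Float64}, so inference succeeds and
# broadcast can type even an empty result:
+(a::Foo, b::Foo) = a.x == b.x ? Foo(1.0) : Foo(Float64(1))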

On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>
>
> The subject says it all: it looks like one can override promote_op to 
> support the following behaviour:
>
> julia> import Base.+
>
> julia> immutable Foo end
> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
> REPL[5]:1 overwritten at REPL[10]:1.
>
> julia> +(a::Foo,b::Foo) = 1.0
> + (generic function with 164 methods)
>
> julia> Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64
>
> julia> Array(Foo,0) + Array(Foo,0)
> 0-element Array{Float64,1}
>
> Is this documented somewhere?  What if we want to override /, -, etc., is 
> the solution to write a promote_op for each case?
>


Re: [julia-users] Re: ANN: A potential new Discourse-based Julia forum

2016-09-23 Thread Patrick Kofod Mogensen
And worst of all, no Julia-speedometer at 
http://nirajkadu.me/index.php/about/ either!

On Thursday, September 22, 2016 at 9:23:39 PM UTC+2, Stefan Karpinski wrote:
>
> Yikes... recycled static IP address :|
>
> On Thu, Sep 22, 2016 at 1:02 PM, mmh  
> wrote:
>
>> http://julia.malmaud.com
>>
>> Now links to some random dude's website :P
>>
>> On Monday, September 19, 2016 at 3:39:34 PM UTC-4, Jonathan Malmaud wrote:
>>>
>>> Discourse lives! 
>>> On Mon, Sep 19, 2016 at 3:01 PM Stefan Karpinski  
>>> wrote:
>>>
 I got the go ahead from Jeff and Viral to give this a try, then it 
 didn't end up panning out. It would still be worth a try, imo.

 On Sat, Sep 17, 2016 at 11:55 AM, mmh  wrote:

> Hi Jonathan,
>
> Seems like this has kind of burnt out. Is there still an impetus on a 
> transition. 
>
> On Saturday, September 19, 2015 at 8:16:36 PM UTC-4, Jonathan Malmaud 
> wrote:
>
>> Hi all,
>> There's been some chatter about maybe switching to a new, more modern 
>> forum platform for Julia that could potentially subsume julia-users, 
>> julia-dev, julia-stats, julia-gpu, and julia-jobs.   I created 
>> http://julia.malmaud.com for us to try one out and see if we like 
>> it. Please check it out and leave feedback. All the old posts from 
>> julia-users have already been imported to it.
>>
>> It is using Discourse, the same forum software used for the forums of 
>> Rust, BoingBoing, and some other big sites. 
>> Benefits over Google Groups include better support for topic tagging, 
>> community moderation features,  Markdown (and hence syntax highlighting) 
>> in 
>> messages, inline previews of linked-to Github issues, better mobile 
>> support, and more options for controlling when and what you get emailed. 
>> The Discourse website does a better job 
>> of summarizing the advantages than I could.
>>
>> To get things started, Mike Innes suggested having a topic on what 
>> we plan on working on this coming week. 
>> I think that's a great idea.
>>
>> Just to be clear, this isn't "official" in any sense - it's just to 
>> kickstart the discussion. 
>>
>> -Jon
>>
>>
>>

>

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Steven G. Johnson


On Friday, September 23, 2016 at 4:13:53 AM UTC-4, Christoph Ortner wrote:
>
> The sum of an empty set or vector is undefined; it is not zero.
>
>> you can rewrite it in a more explicit way
>>
>>>
>>>
> Actually, a sum over an empty set is normally defined to be zero, while a 
> product over an empty set is normally defined to be one.
>

More precisely, the empty sum and product are the additive and 
multiplicative identities, respectively.

However, if you are summing something whose element type is unknown (Any), 
then the additive identity is also unknown (zero(Any) is undefined), and so 
sum(Any[]) is undefined (and throws an error).
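
Concretely, from a 0.5 session (a sketch; exact error text may vary):

julia> sum(Float64[])  # empty sum = zero(Float64)
0.0

julia> sum(Any[])      # zero(Any) does not exist, so this throws
ERROR: MethodError: no method matching zero(::Type{Any})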


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Steven G. Johnson


On Friday, September 23, 2016 at 2:42:00 AM UTC-4, Michele Zaffalon wrote:
>
> On Fri, Sep 23, 2016 at 2:23 AM, Steven G. Johnson  > wrote:
>>
>>
>> We could use type inference on the function t -> t^2 (which is buried in 
>> the generator) to determine a more specific eltype. 
>>
>
> Does this not require evaluating the function on all inputs thereby losing 
> the advantage of having a generator? 
>

No, not if the eltype of the thing the generator iterates over (in this 
case, an Array{Float64}) is known.


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Tsur Herman

> We could use type inference on the function t -> t^2 (which is buried in 
> the generator) to determine a more specific eltype.

We can declare:

Base.eltype(G::Base.Generator) = Base.code_typed(G.f, (eltype(G.iter),))[1].rettype

so that the element type of a Generator G is the inferred return type of G.f 
with arguments of type eltype(G.iter).

And more generally the element type of function F with arguments args can 
be set to
Base.code_typed(F,args)[1].rettype 
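
For example (a sketch; Base.return_types is the related inference query, 
and just as internal as code_typed):

r = rand(10)
g = (t^2 for t in r)
Base.return_types(g.f, (eltype(g.iter),))[1]  # Float64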




On Friday, September 23, 2016 at 11:14:35 AM UTC+3, Christoph Ortner wrote:
>
> why would type inference for sum(t^2 for t in r) be different from [t^2 
> for t in r] ?
>
> On Friday, 23 September 2016 07:42:00 UTC+1, Michele Zaffalon wrote:
>>
>> On Fri, Sep 23, 2016 at 2:23 AM, Steven G. Johnson  
>> wrote:
>>>
>>>
>>> We could use type inference on the function t -> t^2 (which is buried in 
>>> the generator) to determine a more specific eltype. 
>>>
>>
>> Does this not require evaluating the function on all inputs thereby 
>> losing the advantage of having a generator? 
>>
>>

[julia-users] Re: Julia v0.5.0 always killed on a memory limited cluster

2016-09-23 Thread Florian Oswald
This is affecting my work in exactly the same way. My jobs have been failing 
on an SGE-managed cluster since I upgraded to v0.5-xxx for other reasons. 
Some support on this would be very good.

On Thursday, 22 September 2016 11:26:16 UTC+2, Alan Crawford wrote:
>
> I am using Julia v0.5.0 on a memory-limited SGE cluster. In particular, 
> to submit jobs on the cluster, both the h_vmem and tmem resource flags need 
> to be passed in the qsub command. However, all of my Julia jobs keep being 
> killed because workers seem to be very hungry for virtual memory and ask 
> for it outside of the SGE scheduler. 
>
> A more detailed description of the problem can be found in the last post 
> in this thread. It seems that the best workaround (listed on this thread) 
> is to change #define REGION_PG_COUNT 16*8*4096 to #define REGION_PG_COUNT 
> 8*4096 in src/gc-pages.c when compiling Julia. 
>
> However, I am just a user of the cluster and don't have permission to 
> compile Julia on it. Nor do I have access to machinery to compile my own 
> binary. As such I am wondering if there could be a more user-friendly 
> option?
>
> Two ideas come to mind:
>
> 1) Offer low-virtual-memory Julia binaries; or
> 2) Add a flag when launching julia that lets you set K, where 
> REGION_PG_COUNT is defined as K*8*4096 ... or some other equivalent...
>
> In particular, option 2 would really be an addition to the language and 
> greatly aid users - especially those without the capacity to compile julia. 
> If, for some reason, this is undesirable, perhaps option 1 would present the 
> best short-term fix?
>
>

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Michele Zaffalon
From reading the manual, producing a value is delayed until that value is
required. Is my understanding wrong?

On Fri, Sep 23, 2016 at 10:14 AM, Christoph Ortner <
christophortn...@gmail.com> wrote:

> why would type inference for sum(t^2 for t in r) be different from [t^2
> for t in r] ?
>

> On Friday, 23 September 2016 07:42:00 UTC+1, Michele Zaffalon wrote:
>>
>> On Fri, Sep 23, 2016 at 2:23 AM, Steven G. Johnson 
>> wrote:
>>>
>>>
>>> We could use type inference on the function t -> t^2 (which is buried in
>>> the generator) to determine a more specific eltype.
>>>
>>
>> Does this not require evaluating the function on all inputs thereby
>> losing the advantage of having a generator?
>>
>>


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Christoph Ortner
why would type inference for sum(t^2 for t in r) be different from [t^2 for 
t in r] ?

On Friday, 23 September 2016 07:42:00 UTC+1, Michele Zaffalon wrote:
>
> On Fri, Sep 23, 2016 at 2:23 AM, Steven G. Johnson  > wrote:
>>
>>
>> We could use type inference on the function t -> t^2 (which is buried in 
>> the generator) to determine a more specific eltype. 
>>
>
> Does this not require evaluating the function on all inputs thereby losing 
> the advantage of having a generator? 
>
>

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Christoph Ortner
The sum of an empty set or vector is undefined; it is not zero.

> you can rewrite it in a more explicit way
>
>>
>>
Actually, a sum over an empty set is normally defined to be zero, while a 
product over an empty set is normally defined to be one.
 


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Michele Zaffalon
On Fri, Sep 23, 2016 at 2:23 AM, Steven G. Johnson 
wrote:
>
>
> We could use type inference on the function t -> t^2 (which is buried in
> the generator) to determine a more specific eltype.
>

Does this not require evaluating the function on all inputs thereby losing
the advantage of having a generator?


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-23 Thread Milan Bouchet-Valat
On Thursday, September 22, 2016 at 14:54 -0700, Tsur Herman wrote:
> By the way, my test3 function is super fast
> 
>  @time test3(r)
>   0.32 seconds (4 allocations: 160 bytes)
Beware: if you don't return 'total' from the function, LLVM optimizes
away the whole loop and turns the function into a no-op (have a look
at @code_llvm or @code_native).
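
For reference, a version that survives the optimizer returns the
accumulator (and squares each element, which the test3 quoted below also
skips) -- a sketch:

function test3_fixed(r)
    total = 0.0
    for t in r
        total += t^2
    end
    total
end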


Regards


> > 
> > On my side both function perform equally. although test2 had to be
> > timed twice to get to the same performance.
> > 
> > julia> test2(x)= sum( [t^2 for t in x] )
> > 
> > julia> @time test2(r)
> >   0.017423 seconds (13.22 k allocations: 1.339 MB)
> > 
> > julia> @time test2(r)
> >   0.000332 seconds (9 allocations: 781.531 KB)
> > 
> > I think the discrepancy comes from the JITing process, because if I
> > time it without using the macro @time, it works from the first
> > run.
> > 
> > julia> test2(x)= sum( [t^2 for t in x] )
> > WARNING: Method definition test2(Any) in module Main at REPL[68]:1
> > overwritten at REPL[71]:1.
> > test2 (generic function with 1 method)
> > 
> > julia> tic();for i=1:10000 ; test2(r);end;toc()/10000
> > elapsed time: 3.090764498 seconds
> > 0.0003090764498
> > 
> > About the memory footprint -> test2 first constructs the inner
> > vector then calls sum.
> > 
> > > since the type was not inferred the zero-element could not be
> > > created.
> > The sum of an empty set or vector is undefined; it is not zero.
> > you can rewrite it in a more explicit way
> > 
> > test3(r) = begin 
> >     total = Float64(0);
> >  for t in r total+=t ;end;end
> > 
> > 
> > 
> > 
> > 
> > 
> > > I've seen the same, and the answer I got at the JuliaLang gitter
> > > channel was that it could not be inferred because r could be of
> > > length 0, and in that case, the return type could not be
> > > inferred. My Julia-fu is too weak to then explain why the
> > > comprehension would be able to infer the return type.
> > > 
> > > > I see the same, yet:
> > > > 
> > > > julia> r = rand(10^5);
> > > > 
> > > > julia> @time test1(r)
> > > >   0.000246 seconds (7 allocations: 208 bytes)
> > > > 33375.54531253989
> > > > 
> > > > julia> @time test2(r)
> > > >   0.001029 seconds (7 allocations: 781.500 KB)
> > > > 33375.54531253966
> > > > 
> > > > So test1 is efficient, despite the codewarn output. Not sure
> > > > what's up.
> > > > 
> > > > On Thu, Sep 22, 2016 at 2:21 PM, Christoph Ortner wrote:
> > > > > I hope that there is something I am missing, or making a
> > > > > mistake in the following example: 
> > > > > 
> > > > > r = rand(10)
> > > > > test1(r) = sum( t^2 for t in r )
> > > > > test2(r)= sum( [t^2 for t in r] )
> > > > > @code_warntype test1(r)   # return type Any is inferred
> > > > > @code_warntype test2(r)   # return type Float64 is inferred
> > > > > 
> > > > > 
> > > > > This caused a problem for me, beyond execution speed: I used
> > > > > a generator to create the elements for a comprehension, and since
> > > > > the type was not inferred, the zero element could not be
> > > > > created.
> > > > > 
> > > > > Is this a known issue?
> > > > > 
> > > > 
> > > > 
> > > 
> >