Re: [julia-users] Re: Dict get destroys global variable?

2016-09-22 Thread K leo
Thank you Isaiah for pointing that out!

On Friday, September 23, 2016 at 12:06:17 PM UTC+8, Isaiah wrote:
>
> Use `global a = ...`
> Please see: 
> http://docs.julialang.org/en/latest/manual/variables-and-scoping/#hard-local-scope
>
> global variables are only inherited for reading but not for writing
>
>
>  
>


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Tsur Herman
I can see a point in what you say: the eltype of a function should be the
return type of that function if it can be inferred, because an array is just
a special kind of function with a special notation.


On Friday, 23 September 2016, Steven G. Johnson 
wrote:

>
>
> On Thursday, September 22, 2016 at 6:10:29 PM UTC-4, Tsur Herman wrote:
>>
>> The real problem is that eltype(t^2 for t in rand(10)) returns Any.
>>
>>
>> that is not a problem. (t^2 for t in rand(10)) is a generator; its element
>> type is Any, which means a pointer to something complex.
>>
>>>
> It is a problem, because it means that the result type of sum cannot be
> inferred.
>
> We could use type inference on the function t -> t^2 (which is buried in
> the generator) to determine a more specific eltype.
>


Re: [julia-users] Re: Dict get destroys global variable?

2016-09-22 Thread Isaiah Norton
Use `global a = ...`
Please see:
http://docs.julialang.org/en/latest/manual/variables-and-scoping/#hard-local-scope

global variables are only inherited for reading but not for writing
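Applied to the snippet from the original post, the fix is a one-line `global` declaration (a minimal sketch of the same `testGlobal`):

```julia
a = 0
Dicta = Dict{Int,Int}()

function testGlobal()
    global a                  # assignments below now target the global `a`
    println(a)
    merge!(Dicta, Dict(1=>1))
    a = get(Dicta, 1, 0)      # updates the global instead of shadowing it
    println(a)
    nothing
end

testGlobal()   # prints 0, then 1; the global `a` is now 1
```

Without the declaration, the assignment makes `a` local to the *whole* function body, which is why even the first `println(a)` failed.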




On Fri, Sep 23, 2016 at 12:00 AM, K leo  wrote:

> Sorry, this is not related to Dict at all.  If I replace the "a=get..."
> statement with simply "a=2", the global variable is no longer accessible.
> What is wrong?
>
>
> On Friday, September 23, 2016 at 11:52:24 AM UTC+8, K leo wrote:
>>
>> Calling "get" anywhere in a function makes a global variable undefined.
>> Can anyone please help explain the following?
>>
>> 1) without calling "get", the global variable is fine:
>>
>> a=0
>>>
>>> Dicta = Dict{Int,Int}()
>>>
>>> function testGlobal()
>>>
>>> println(a)
>>>
>>> merge!(Dicta, Dict(1=>1))
>>>
>>> # a=get(Dicta, 1, 0)
>>>
>>> println(a)
>>>
>>> nothing
>>>
>>> end
>>>
>>>
>>> julia> testGlobal()
>> 0
>> 0
>>
>> 2) calling "get" (same code as above except uncommenting the "get"
>> statement) and the global variable becomes undefined:
>>
>> a=0
>>>
>>> Dicta = Dict{Int,Int}()
>>>
>>> function testGlobal()
>>>
>>> println(a)
>>>
>>> merge!(Dicta, Dict(1=>1))
>>>
>>> a=get(Dicta, 1, 0)
>>>
>>> println(a)
>>>
>>> nothing
>>>
>>> end
>>>
>>>
>>> julia> testGlobal()
>> ERROR: UndefVarError: a not defined
>>  in testGlobal() at /xxx/testType.jl:4
>>
>> 3) not calling the first println, the code works, but the global a is not
>> set:
>>
>> a=0
>>>
>>> Dicta = Dict{Int,Int}()
>>>
>>> function testGlobal()
>>>
>>> # println(a)
>>>
>>> merge!(Dicta, Dict(1=>1))
>>>
>>> a=get(Dicta, 1, 0)
>>>
>>> println(a)
>>>
>>> nothing
>>>
>>> end
>>>
>>>
>>>
>> julia> testGlobal()
>> 1
>>
>>
>>


[julia-users] Re: Dict get destroys global variable?

2016-09-22 Thread K leo
Sorry, this is not related to Dict at all.  If I replace the "a=get..." 
statement with simply "a=2", the global variable is no longer accessible. 
 What is wrong?

On Friday, September 23, 2016 at 11:52:24 AM UTC+8, K leo wrote:
>
> Calling "get" anywhere in a function makes a global variable undefined. 
> Can anyone please help explain the following?
>
> 1) without calling "get", the global variable is fine:
>
> a=0
>>
>> Dicta = Dict{Int,Int}()
>>
>> function testGlobal()
>>
>> println(a)
>>
>> merge!(Dicta, Dict(1=>1))
>>
>> # a=get(Dicta, 1, 0)
>>
>> println(a)  
>>
>> nothing
>>
>> end
>>
>>
>> julia> testGlobal()
> 0
> 0
>
> 2) calling "get" (same code as above except uncommenting the "get" 
> statement) and the global variable becomes undefined:
>
> a=0
>>
>> Dicta = Dict{Int,Int}()
>>
>> function testGlobal()
>>
>> println(a)
>>
>> merge!(Dicta, Dict(1=>1))
>>
>> a=get(Dicta, 1, 0)
>>
>> println(a)  
>>
>> nothing
>>
>> end
>>
>>
>> julia> testGlobal()
> ERROR: UndefVarError: a not defined
>  in testGlobal() at /xxx/testType.jl:4
>
> 3) not calling the first println, the code works, but the global a is not 
> set:
>
> a=0
>>
>> Dicta = Dict{Int,Int}()
>>
>> function testGlobal()
>>
>> # println(a)
>>
>> merge!(Dicta, Dict(1=>1))
>>
>> a=get(Dicta, 1, 0)
>>
>> println(a)  
>>
>> nothing
>>
>> end
>>
>>
>>
> julia> testGlobal()
> 1
>
>
>

[julia-users] Dict get destroys global variable?

2016-09-22 Thread K leo
Calling "get" anywhere in a function makes a global variable undefined. 
Can anyone please help explain the following?

1) without calling "get", the global variable is fine:

a=0
>
> Dicta = Dict{Int,Int}()
>
> function testGlobal()
>
> println(a)
>
> merge!(Dicta, Dict(1=>1))
>
> # a=get(Dicta, 1, 0)
>
> println(a)  
>
> nothing
>
> end
>
>
> julia> testGlobal()
0
0

2) calling "get" (same code as above, but with the "get" statement 
uncommented), the global variable becomes undefined:

a=0
>
> Dicta = Dict{Int,Int}()
>
> function testGlobal()
>
> println(a)
>
> merge!(Dicta, Dict(1=>1))
>
> a=get(Dicta, 1, 0)
>
> println(a)  
>
> nothing
>
> end
>
>
> julia> testGlobal()
ERROR: UndefVarError: a not defined
 in testGlobal() at /xxx/testType.jl:4

3) without calling the first println, the code works, but the global a is 
not set:

a=0
>
> Dicta = Dict{Int,Int}()
>
> function testGlobal()
>
> # println(a)
>
> merge!(Dicta, Dict(1=>1))
>
> a=get(Dicta, 1, 0)
>
> println(a)  
>
> nothing
>
> end
>
>
>
julia> testGlobal()
1




[julia-users] How does promote_op work?

2016-09-22 Thread Sheehan Olver

The subject says it all: it looks like one can override promote_op to 
support the following behaviour:

*julia> **import Base.+*


*julia> **immutable Foo end*

WARNING: Method definition (::Type{Main.Foo})() in module Main at REPL[5]:1 
overwritten at REPL[10]:1.


*julia> **+(a::Foo,b::Foo) = 1.0*

*+ (generic function with 164 methods)*


*julia> **Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64*


*julia> **Array(Foo,0) + Array(Foo,0)*

*0-element Array{Float64,1}*


Is this documented somewhere?  If we want to override /, -, etc., is the 
solution to write a promote_op method for each case?
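For illustration only (0.5-era internals; `promote_op` is undocumented and may change between versions), the pattern from the transcript above could be repeated for several operators with a metaprogramming loop. `Foo` here is the same toy type as in the example:

```julia
import Base: +, -, *, /

immutable Foo end   # 0.5 syntax; later Julia versions spell this `struct`

# Define each operator on Foo and the matching promote_op method in one pass.
for op in (:+, :-, :*, :/)
    @eval begin
        ($op)(::Foo, ::Foo) = 1.0
        Base.promote_op(::typeof($op), ::Type{Foo}, ::Type{Foo}) = Float64
    end
end
```

With these definitions, elementwise operations on empty `Foo` arrays should produce `Array{Float64}` results, as in the `+` example above.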


Re: [julia-users] how to get rid of axis in Plots ?

2016-09-22 Thread Roger Herikstad
Just filed an issue (#500). Thanks for looking into this!

On Thursday, September 22, 2016 at 6:34:49 PM UTC+8, Tom Breloff wrote:
>
> This isn't configurable right now, but I've wanted the same thing. Can you 
> open an issue and I'll add when I have a couple minutes? I think the 
> plotlyjs backend does this by default though. 
>
> On Thursday, September 22, 2016, Roger Herikstad  > wrote:
>
>> Hey,
>>  Similar question; how do I remove only the upper and right border? My 
>> personal preference is to only show the left and bottom border. In Winston, 
>> I'd do this
>>
>> p = Winston.plot([1,2,3],[3,4,5])
>> Winston.setattr(p.frame, "draw_axis", false)
>>  
>> Thanks!
>>
>>
>> On Wednesday, June 29, 2016 at 2:15:47 AM UTC+8, Henri Girard wrote:
>>>
>>> I just found it at the same time ... Thank you :)
>>>
Le mardi 28 juin 2016 17:48:34 UTC+2, Henri Girard wrote:

 I don't see any command to get rid of the axes in Plots, or to hide them.
 I tried axis=false and axes=false, likewise with none, but nothing works...



[julia-users] Re: Broadcast slices

2016-09-22 Thread Steven G. Johnson
At some point, it is simpler to just write loops than to try and express a 
complicated operation in terms of higher-order functions like broadcast.
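As one concrete sketch of that advice (the name `slicewise!` and the column-slice convention are just illustrative choices, not an existing API), a slice-wise combination of two matrices is short and type-stable as a plain loop:

```julia
# Combine matching columns of A and B with f, writing into `out`.
function slicewise!(out, f, A, B)
    for j in 1:size(A, 2)
        out[:, j] = f(view(A, :, j), view(B, :, j))
    end
    return out
end

A = rand(3, 4); B = rand(3, 4)
C = slicewise!(similar(A), (a, b) -> a .+ b, A, B)   # same result as A .+ B here
```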


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Steven G. Johnson


On Thursday, September 22, 2016 at 6:10:29 PM UTC-4, Tsur Herman wrote:
>
> The real problem is that eltype(t^2 for t in rand(10)) returns Any.
>
>
> that is not a problem. (t^2 for t in rand(10)) is a generator; its element 
> type is Any, which means a pointer to something complex.
>
>>
It is a problem, because it means that the result type of sum cannot be 
inferred.

We could use type inference on the function t -> t^2 (which is buried in 
the generator) to determine a more specific eltype. 
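The gap is directly observable (behavior as of this era; `Base.return_types` is an internal reflection tool, used here only for illustration):

```julia
# The generator itself advertises no element type...
gen = (t^2 for t in rand(10))
eltype(gen)                               # Any

# ...even though inference can determine the mapped function's return type:
Base.return_types(t -> t^2, (Float64,))   # [Float64]
```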


Re: [julia-users] Is FMA/Muladd Working Here?

2016-09-22 Thread Chris Rackauckas
So, in the end, is `@fastmath` supposed to be adding FMA? Should I open an 
issue?
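In the meantime, writing `muladd` explicitly is a reliable way to request a fused operation (a sketch; whether the CPU actually emits `vfmadd` depends on the build target, as discussed below):

```julia
f(x) = 3.1x + 2.5            # may compile to a separate multiply and add
g(x) = muladd(x, 3.1, 2.5)   # *permitted* to fuse into a single fma
h(x) = fma(x, 3.1, 2.5)      # *requires* fused (single-rounding) semantics

@code_native g(1.2)          # look for vfmadd on an FMA-capable CPU
```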

On Wednesday, September 21, 2016 at 7:11:14 PM UTC-7, Yichao Yu wrote:
>
> On Wed, Sep 21, 2016 at 9:49 PM, Erik Schnetter  > wrote: 
> > I confirm that I can't get Julia to synthesize a `vfmadd` instruction 
> > either... Sorry for sending you on a wild goose chase. 
>
> -march=haswell does the trick for C (both clang and gcc). 
> The necessary bit for the machine IR optimization (this is not an LLVM 
> IR optimization pass) to do this is the llc option -mcpu=haswell and the 
> function attribute unsafe-fp-math=true. 
>
> > 
> > -erik 
> > 
> > On Wed, Sep 21, 2016 at 9:33 PM, Yichao Yu  > wrote: 
> >> 
> >> On Wed, Sep 21, 2016 at 9:29 PM, Erik Schnetter  > 
> >> wrote: 
> >> > On Wed, Sep 21, 2016 at 9:22 PM, Chris Rackauckas  > 
> >> > wrote: 
> >> >> 
> >> >> I'm not seeing `@fastmath` apply fma/muladd. I rebuilt the sysimg 
> and 
> >> >> now 
> >> >> I get results where g and h apply muladd/fma in the native code, but 
> a 
> >> >> new 
> >> >> function k which is `@fastmath` inside of f does not apply 
> muladd/fma. 
> >> >> 
> >> >> 
> >> >> 
> https://gist.github.com/ChrisRackauckas/b239e33b4b52bcc28f3922c673a25910 
> >> >> 
> >> >> Should I open an issue? 
> >> > 
> >> > 
> >> > In your case, LLVM apparently thinks that `x + x + 3` is faster to 
> >> > calculate 
> >> > than `2x+3`. If you use a less round number than `2` multiplying `x`, 
> >> > you 
> >> > might see a different behaviour. 
> >> 
> >> I've personally never seen llvm create fma from mul and add. We might 
> >> not have the llvm passes enabled if LLVM is capable of doing this at 
> >> all. 
> >> 
> >> > 
> >> > -erik 
> >> > 
> >> > 
> >> >> Note that this is on v0.6 Windows. On Linux the sysimg isn't 
> rebuilding 
> >> >> for some reason, so I may need to just build from source. 
> >> >> 
> >> >> On Wednesday, September 21, 2016 at 6:22:06 AM UTC-7, Erik Schnetter 
> >> >> wrote: 
> >> >>> 
> >> >>> On Wed, Sep 21, 2016 at 1:56 AM, Chris Rackauckas <
> rack...@gmail.com> 
> >> >>> wrote: 
> >>  
> >>  Hi, 
> >>    First of all, does LLVM essentially fma or muladd expressions 
> like 
> >>  `a1*x1 + a2*x2 + a3*x3 + a4*x4`? Or is it required that one 
> >>  explicitly use 
> >>  `muladd` and `fma` on these types of instructions (is there a 
> macro 
> >>  for 
> >>  making this easier)? 
> >> >>> 
> >> >>> 
> >> >>> Yes, LLVM will use fma machine instructions -- but only if they 
> lead 
> >> >>> to 
> >> >>> the same round-off error as using separate multiply and add 
> >> >>> instructions. If 
> >> >>> you do not care about the details of conforming to the IEEE 
> standard, 
> >> >>> then 
> >> >>> you can use the `@fastmath` macro that enables several 
> optimizations, 
> >> >>> including this one. This is described in the manual 
> >> >>> 
> >> >>> <http://docs.julialang.org/en/release-0.5/manual/performance-tips/#performance-annotations>. 
> >> >>> 
> >> >>> 
> >>    Secondly, I am wondering if my setup is not applying these 
> >>  operations correctly. Here's my test code: 
> >>  
> >>  f(x) = 2.0x + 3.0 
> >>  g(x) = muladd(x,2.0, 3.0) 
> >>  h(x) = fma(x,2.0, 3.0) 
> >>  
> >>  @code_llvm f(4.0) 
> >>  @code_llvm g(4.0) 
> >>  @code_llvm h(4.0) 
> >>  
> >>  @code_native f(4.0) 
> >>  @code_native g(4.0) 
> >>  @code_native h(4.0) 
> >>  
> >>  Computer 1 
> >>  
> >>  Julia Version 0.5.0-rc4+0 
> >>  Commit 9c76c3e* (2016-09-09 01:43 UTC) 
> >>  Platform Info: 
> >>    System: Linux (x86_64-redhat-linux) 
> >>    CPU: Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz 
> >>    WORD_SIZE: 64 
> >>    BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Haswell) 
> >>    LAPACK: libopenblasp.so.0 
> >>    LIBM: libopenlibm 
> >>    LLVM: libLLVM-3.7.1 (ORCJIT, broadwell) 
> >> >>> 
> >> >>> 
> >> >>> This looks good, the "broadwell" architecture that LLVM uses should 
> >> >>> imply 
> >> >>> the respective optimizations. Try with `@fastmath`. 
> >> >>> 
> >> >>> -erik 
> >> >>> 
> >> >>> 
> >> >>> 
> >> >>> 
> >>  
> >>  (the COPR nightly on CentOS7) with 
> >>  
> >>  [crackauc@crackauc2 ~]$ lscpu 
> >>  Architecture:  x86_64 
> >>  CPU op-mode(s):32-bit, 64-bit 
> >>  Byte Order:Little Endian 
> >>  CPU(s):16 
> >>  On-line CPU(s) list:   0-15 
> >>  Thread(s) per core:1 
> >>  Core(s) per socket:8 
> >>  Socket(s): 2 
> >>  NUMA node(s):  2 
> >>  Vendor ID: GenuineIntel 
> >>  CPU family:6 
> >>  Model: 79 
> >>  Model name:Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz 
> >>  Stepping:  1 
> >>  CPU MHz:   

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Tsur Herman
The real problem is that eltype(t^2 for t in rand(10)) returns Any.


That is not a problem: (t^2 for t in rand(10)) is a generator; its element 
type is Any, which means a pointer to something complex.


On Friday, September 23, 2016 at 12:50:18 AM UTC+3, Steven G. Johnson wrote:
>
> I don't think the empty case should be the problem.  If it can't infer the 
> type, sum just throws an error.  So test1(r) actually always returns the 
> same type for r::Array{Float64} in any case where it returns a value at all.
>
> The real problem is that eltype(t^2 for t in rand(10)) returns Any.
>


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Tsur Herman
By the way, my test3 function is super fast:

 @time test3(r)
  0.32 seconds (4 allocations: 160 bytes)

  

On Friday, September 23, 2016 at 12:48:50 AM UTC+3, Tsur Herman wrote:
>
>
> On my side both functions perform equally, although test2 had to be timed 
> twice to get to the same performance.
>
> julia> test2(x)= sum( [t^2 for t in x] )
>
> julia> @time test2(r)
>   0.017423 seconds (13.22 k allocations: 1.339 MB)
>
> julia> @time test2(r)
>   0.000332 seconds (9 allocations: 781.531 KB)
>
> I think the discrepancy comes from the JIT process, because if I time 
> it without using the @time macro, it is fast from the first run.
>
> julia> test2(x)= sum( [t^2 for t in x] )
> WARNING: Method definition test2(Any) in module Main at REPL[68]:1 
> overwritten at REPL[71]:1.
> test2 (generic function with 1 method)
>
> julia> tic();for i=1:10000 ; test2(r);end;toc()/10000
> elapsed time: 3.090764498 seconds
> 0.0003090764498
>
> About the memory footprint -> test2 first constructs the inner vector then 
> calls sum.
>
> since the type was not inferred the zero-element could not be created.
>>
> The sum of an empty set or vector is undefined; it is not zero.
> you can rewrite it in a more explicit way
>
> test3(r) = begin
>     total = 0.0
>     for t in r
>         total += t
>     end
>     total   # return the accumulator; a bare `for` loop returns nothing
> end
>
>
>
>
>
>
> On Thursday, September 22, 2016 at 10:50:39 PM UTC+3, Patrick Kofod 
> Mogensen wrote:
>>
>> I've seen the same, and the answer I got at the JuliaLang gitter channel 
>> was that it could not be inferred because r could be of length 0, and in 
>> that case, the return type could not be inferred. My Julia-fu is too weak 
>> to then explain why the comprehension would be able to infer the return 
>> type.
>>
>> On Thursday, September 22, 2016 at 9:27:37 PM UTC+2, Stefan Karpinski 
>> wrote:
>>>
>>> I see the same, yet:
>>>
>>> julia> r = rand(10^5);
>>>
>>> julia> @time test1(r)
>>>   0.000246 seconds (7 allocations: 208 bytes)
>>> 33375.54531253989
>>>
>>> julia> @time test2(r)
>>>   0.001029 seconds (7 allocations: 781.500 KB)
>>> 33375.54531253966
>>>
>>>
>>> So test1 is efficient, despite the codewarn output. Not sure what's up.
>>>
>>> On Thu, Sep 22, 2016 at 2:21 PM, Christoph Ortner >> > wrote:
>>>
 I hope that there is something I am missing, or making a mistake in the 
 following example: 

 r = rand(10)
 test1(r) = sum( t^2 for t in r )
 test2(r)= sum( [t^2 for t in r] )
 @code_warntype test1(r)   # return type Any is inferred
 @code_warntype test2(r)   # return type Float64 is inferred


 This caused a problem for me, beyond execution speed: I used a 
 generator to create the elements for a comprehension, since the type was 
 not inferred the zero-element could not be created.

 Is this a known issue?

>>>
>>>

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Steven G. Johnson
I don't think the empty case should be the problem.  If it can't infer the 
type, sum just throws an error.  So test1(r) actually always returns the 
same type for r::Array{Float64} in any case where it returns a value at all.

The real problem is that eltype(t^2 for t in rand(10)) returns Any.


Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Tsur Herman

On my side both functions perform equally, although test2 had to be timed 
twice to get to the same performance.

julia> test2(x)= sum( [t^2 for t in x] )

julia> @time test2(r)
  0.017423 seconds (13.22 k allocations: 1.339 MB)

julia> @time test2(r)
  0.000332 seconds (9 allocations: 781.531 KB)

I think the discrepancy comes from the JIT process, because if I time it 
without using the @time macro, it is fast from the first run.

julia> test2(x)= sum( [t^2 for t in x] )
WARNING: Method definition test2(Any) in module Main at REPL[68]:1 
overwritten at REPL[71]:1.
test2 (generic function with 1 method)

julia> tic();for i=1:10000 ; test2(r);end;toc()/10000
elapsed time: 3.090764498 seconds
0.0003090764498

About the memory footprint: test2 first constructs the inner vector, then 
calls sum.

since the type was not inferred the zero-element could not be created.
>
The sum of an empty set or vector is undefined; it is not zero.
you can rewrite it in a more explicit way

test3(r) = begin
    total = 0.0
    for t in r
        total += t
    end
    total   # return the accumulator; a bare `for` loop returns nothing
end






On Thursday, September 22, 2016 at 10:50:39 PM UTC+3, Patrick Kofod 
Mogensen wrote:
>
> I've seen the same, and the answer I got at the JuliaLang gitter channel 
> was that it could not be inferred because r could be of length 0, and in 
> that case, the return type could not be inferred. My Julia-fu is too weak 
> to then explain why the comprehension would be able to infer the return 
> type.
>
> On Thursday, September 22, 2016 at 9:27:37 PM UTC+2, Stefan Karpinski 
> wrote:
>>
>> I see the same, yet:
>>
>> julia> r = rand(10^5);
>>
>> julia> @time test1(r)
>>   0.000246 seconds (7 allocations: 208 bytes)
>> 33375.54531253989
>>
>> julia> @time test2(r)
>>   0.001029 seconds (7 allocations: 781.500 KB)
>> 33375.54531253966
>>
>>
>> So test1 is efficient, despite the codewarn output. Not sure what's up.
>>
>> On Thu, Sep 22, 2016 at 2:21 PM, Christoph Ortner  
>> wrote:
>>
>>> I hope that there is something I am missing, or making a mistake in the 
>>> following example: 
>>>
>>> r = rand(10)
>>> test1(r) = sum( t^2 for t in r )
>>> test2(r)= sum( [t^2 for t in r] )
>>> @code_warntype test1(r)   # return type Any is inferred
>>> @code_warntype test2(r)   # return type Float64 is inferred
>>>
>>>
>>> This caused a problem for me, beyond execution speed: I used a generator 
>>> to create the elements for a comprehension, since the type was not inferred 
>>> the zero-element could not be created.
>>>
>>> Is this a known issue?
>>>
>>
>>

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Patrick Kofod Mogensen
There might be a perfectly valid explanation for this, but this also 
surprises me. 
r = rand(10)
f(x) =  x^2
test1(r) = sum( f(t) for t in r )
test2(r) = sum( [f(t) for t in r] )
@code_warntype test1(r)   # return type Any is inferred
@code_warntype test2(r)   # return type Float64 is inferred

g(x)::Float64 =  x^2
test3(r) = sum( g(t) for t in r )
test4(r) = sum( [g(t) for t in r] )
@code_warntype test3(r)   # return type Any is inferred
@code_warntype test4(r)   # return type Any is inferred

Why would the return type annotation in g(x) (compared to f(x)) ruin 
inference for test4? I might be doing something stupid, but...

On Thursday, September 22, 2016 at 11:30:19 PM UTC+2, Christoph Ortner 
wrote:
>
> would it maybe be possible to introduce a macro like @inbounds that 
> somehow turns off the check that the generator is empty?
>


Re: [julia-users] Re: Adding publications easier

2016-09-22 Thread Tony Kelman
Yeah, I didn't realize the README was suggesting that either, sorry. We should 
automate the complicated part on our side.

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Christoph Ortner
would it maybe be possible to introduce a macro like @inbounds that somehow 
turns off the check that the generator is empty?


[julia-users] PkgDev.tag issues

2016-09-22 Thread Tony Kelman
What does Pkg.status() say? What operating system are you using? What is git 
status in METADATA and the package repo?

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Christoph Ortner
Yeah, this definitely matters for performance. It is a real shame since the 
generators are so elegant to use.

inner1(R, i) = sum( R[j,i] for j = 1:size(R,1) )
inner2(R, i) = sum( [R[j,i] for j = 1:size(R,1)] )

function test(R, inner)
n = [ inner(R, i)^2 for i = 1:size(R,2) ]
N = length(n)
s = 0.0
for i = 1:N, j = max(1, i-5):min(N, i+5)
s += n[i] * n[j]
end 
return s
end 

R = rand(10, 1000);


@time test(R, inner1)
@time test(R, inner1)
@time test(R, inner1)  
#   0.002539 seconds (76.02 k allocations: 1.396 MB)
#   0.002264 seconds (76.02 k allocations: 1.396 MB)
#   0.002094 seconds (76.02 k allocations: 1.396 MB)


@time test(R, inner2)
@time test(R, inner2)
@time test(R, inner2)
#   0.000131 seconds (4.01 k allocations: 242.547 KB)
#   0.000106 seconds (4.01 k allocations: 242.547 KB)
#   0.000104 seconds (4.01 k allocations: 242.547 KB)





Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Christoph Ortner
sum( Float64[] ) = 0.0 ?
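Right, and that is exactly where the eltype matters (a behavior sketch; the empty-generator case errors rather than producing a zero):

```julia
# With a concrete element type, the empty sum is well defined:
sum(Float64[])                 # 0.0, because zero(Float64) exists

# A generator's eltype is Any, so there is no zero to fall back on:
# sum(x for x in Float64[])    # throws: cannot reduce over an empty collection
```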


Re: [julia-users] Implementing compilers or interpreters in Julia?

2016-09-22 Thread Tim Besard
On Thursday, September 22, 2016 at 12:55:01 PM UTC-4, Isaiah wrote:

> - Cxx.jl, if you want to target LLVM
>

I've been working on LLVM.jl to make this even easier:
- https://github.com/maleadt/LLVM.jl
- irbuilder + execution example: 
https://github.com/maleadt/LLVM.jl/blob/master/examples/sum.jl

It's going to be used to power CUDAnative; see e.g. 
https://gist.github.com/maleadt/ec07509c880903efce0d2fed6bded416

Tim



Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Patrick Kofod Mogensen
I've seen the same, and the answer I got at the JuliaLang gitter channel 
was that it could not be inferred because r could be of length 0, and in 
that case, the return type could not be inferred. My Julia-fu is too weak 
to then explain why the comprehension would be able to infer the return 
type.

On Thursday, September 22, 2016 at 9:27:37 PM UTC+2, Stefan Karpinski wrote:
>
> I see the same, yet:
>
> julia> r = rand(10^5);
>
> julia> @time test1(r)
>   0.000246 seconds (7 allocations: 208 bytes)
> 33375.54531253989
>
> julia> @time test2(r)
>   0.001029 seconds (7 allocations: 781.500 KB)
> 33375.54531253966
>
>
> So test1 is efficient, despite the codewarn output. Not sure what's up.
>
> On Thu, Sep 22, 2016 at 2:21 PM, Christoph Ortner  > wrote:
>
>> I hope that there is something I am missing, or making a mistake in the 
>> following example: 
>>
>> r = rand(10)
>> test1(r) = sum( t^2 for t in r )
>> test2(r)= sum( [t^2 for t in r] )
>> @code_warntype test1(r)   # return type Any is inferred
>> @code_warntype test2(r)   # return type Float64 is inferred
>>
>>
>> This caused a problem for me, beyond execution speed: I used a generator 
>> to create the elements for a comprehension, since the type was not inferred 
>> the zero-element could not be created.
>>
>> Is this a known issue?
>>
>
>

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Christoph Ortner
I didn't actually test performance - the problem for me was re-use of the 
output of test1. But it is hard to reproduce this with a simple example. 
The same code works in some situations and not in others - I haven't yet 
found out why.




[julia-users] Re: ANN: Julia v0.5.0 released!

2016-09-22 Thread Jeffrey Sarnoff
whoo-hoo!

On Tuesday, September 20, 2016 at 5:08:44 AM UTC-4, Tony Kelman wrote:
>
> At long last, we can announce the final release of Julia 0.5.0! See the 
> release notes at 
> https://github.com/JuliaLang/julia/blob/release-0.5/NEWS.md for more 
> details, and expect a blog post with some highlights within the next few 
> days. Binaries are available from the usual place 
> , and please report all issues to either 
> the issue tracker  or email 
> the julia-users list. Don't CC julia-news, which is intended to be 
> low-volume, if you reply to this message.
>
> Many thanks to all the contributors, package authors, users and reporters 
> of issues who helped us get here. We'll be releasing regular monthly bugfix 
> backports from the 0.5.x line, while major feature work is ongoing on 
> master for 0.6-dev. Enjoy!
>
>
> We haven't made the change just yet, but to package authors: please be 
> aware that `release` on Travis CI for `language: julia` will change meaning 
> to 0.5 shortly. So if you want to continue supporting Julia 0.4 in your 
> package, please update your .travis.yml file to have an "- 0.4" entry under 
> `julia:` versions. When you want to drop support for Julia 0.4, update your 
> REQUIRE file to list `julia 0.5` as a lower bound, and the next time you 
> tag the package be sure to increment the minor or major version number via 
> `PkgDev.tag(pkgname, :minor)`.
>
>

Re: [julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Stefan Karpinski
I see the same, yet:

julia> r = rand(10^5);

julia> @time test1(r)
  0.000246 seconds (7 allocations: 208 bytes)
33375.54531253989

julia> @time test2(r)
  0.001029 seconds (7 allocations: 781.500 KB)
33375.54531253966


So test1 is efficient, despite the codewarn output. Not sure what's up.

On Thu, Sep 22, 2016 at 2:21 PM, Christoph Ortner <
christophortn...@gmail.com> wrote:

> I hope that there is something I am missing, or making a mistake in the
> following example:
>
> r = rand(10)
> test1(r) = sum( t^2 for t in r )
> test2(r)= sum( [t^2 for t in r] )
> @code_warntype test1(r)   # return type Any is inferred
> @code_warntype test2(r)   # return type Float64 is inferred
>
>
> This caused a problem for me, beyond execution speed: I used a generator
> to create the elements for a comprehension, since the type was not inferred
> the zero-element could not be created.
>
> Is this a known issue?
>


[julia-users] Broadcast slices

2016-09-22 Thread Brandon Taylor
Is there a way to slice two or more arrays along the same dimension, 
broadcast a function over the slices, then put the results back together 
again along the sliced dimension? Essentially, broadcast_slices?
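As far as I know there is no built-in `broadcast_slices`; one workable sketch (assuming column slices, just for illustration) is a comprehension over the slice index plus `hcat` to reassemble:

```julia
A = rand(3, 4); B = rand(3, 4)

# Apply f to matching column slices, then glue the results back together.
f(a, b) = a .+ 2b                       # any function of two vector slices
C = reduce(hcat, [f(A[:, j], B[:, j]) for j in 1:size(A, 2)])

size(C)   # (3, 4) whenever f preserves the slice length
```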


Re: [julia-users] Re: ANN: A potential new Discourse-based Julia forum

2016-09-22 Thread Stefan Karpinski
Yikes... recycled static IP address :|

On Thu, Sep 22, 2016 at 1:02 PM, mmh  wrote:

> http://julia.malmaud.com
> 
>
> Now links to some random dude's website :P
>
> On Monday, September 19, 2016 at 3:39:34 PM UTC-4, Jonathan Malmaud wrote:
>>
>> Discourse lives!
>> On Mon, Sep 19, 2016 at 3:01 PM Stefan Karpinski 
>> wrote:
>>
>>> I got the go ahead from Jeff and Viral to give this a try, then it
>>> didn't end up panning out. It would still be worth a try, imo.
>>>
>>> On Sat, Sep 17, 2016 at 11:55 AM, mmh  wrote:
>>>
 Hi Jonathan,

 Seems like this has kind of burnt out. Is there still an impetus on a
 transition.

 On Saturday, September 19, 2015 at 8:16:36 PM UTC-4, Jonathan Malmaud
 wrote:

> Hi all,
> There's been some chatter about maybe switching to a new, more modern
> forum platform for Julia that could potentially subsume julia-users,
> julia-dev, julia-stats, julia-gpu, and julia-jobs.   I created
> http://julia.malmaud.com for us to try one out and see if we like it.
> Please check it out and leave feedback. All the old posts from julia-users
> have already been imported to it.
>
> It is using Discourse , the same forum
> software used for the forums of Rust ,
> BoingBoing, and some other big sites. Benefits over Google Groups include
> better support for topic tagging, community moderation features,  Markdown
> (and hence syntax highlighting) in messages, inline previews of linked-to
> Github issues, better mobile support, and more options for controlling 
> when
> and what you get emailed. The Discourse website
>  does a better job of summarizing the
> advantages than I could.
>
> To get things started, Mike Innes suggested having a topic on what we
> plan on working on this coming week.
> I think that's a great idea.
>
> Just to be clear, this isn't "official" in any sense - it's just to
> kickstart the discussion.
>
> -Jon
>
>
>
>>>


Re: [julia-users] Re: Adding publications easier

2016-09-22 Thread Magnus Röding
Thanks for the quick response in changing the workflow, now it's more at my 
(and hopefully others') level.

On Thursday, September 22, 2016 at 17:56:13 UTC+2, Stefan Karpinski wrote:
>
> I didn't realize that there was a README that actually tells you to do all 
> of that. I've opened an issue 
>  since 
> there's no reason someone should have to install Jekyll in order to submit 
> a publication that uses Julia – that's way over the top.
>
> On Thu, Sep 22, 2016 at 11:48 AM, Stefan Karpinski  > wrote:
>
>> You don't need Jekyll installed to edit markdown files. You can even edit 
>> files directly in the web and preview the rendering. Admittedly, installing 
>> Jekyll is a pain and seems to have gotten worse over time somehow, but you 
>> don't need to do any of that to submit a publication.
>>
>> On Thu, Sep 22, 2016 at 8:08 AM, Magnus Röding > > wrote:
>>
>>> I have now made a new attempt, ending up having everything installed and 
>>> seemingly working on an Ubuntu 16.04 system. My workflow so far is:
>>>
>>> - git clone https:// to local repo (I have a github account)
>>> - edit julia.bib and _EDIT_ME_index.md according to instructions
>>> - run 'make' in publications directory
>>> - run 'bundle exec jekyll build' in main repo directory
>>> - adding the modified files by 'git add *'
>>>
>>> So, off-topic question as far as Julia goes, but what to do now? I 
>>> realize I'm supposed to commit, annotate, request-pull, and go nuts, but in 
>>> which order?
>>>
>>> Thanks in advance, not my cup of tea this, so if anyone can help tx :-)
>>>
>>> On Tuesday, September 20, 2016 at 21:14:21 UTC+2, Tony Kelman wrote:
>>>
 What do you propose? GitHub is about as simple as we can do, 
 considering also the complexity of maintaining something from the project 
 side. There are plenty of people around the community who are happy to walk 
 you through the process of making a pull request, and if it's not explained 
 in enough detail then we can add more instructions if it would help. What 
 have you tried so far?
>>>
>>>
>>
>

[julia-users] Generators vs Comprehensions, Type-stability?

2016-09-22 Thread Christoph Ortner
I hope that there is something I am missing, or making a mistake in the 
following example: 

r = rand(10)
test1(r) = sum( t^2 for t in r )
test2(r)= sum( [t^2 for t in r] )
@code_warntype test1(r)   # return type Any is inferred
@code_warntype test2(r)   # return type Float64 is inferred


This caused a problem for me, beyond execution speed: I used a generator to 
create the elements for a comprehension, since the type was not inferred 
the zero-element could not be created.

Is this a known issue?


Re: [julia-users] Julia's current state of web-programming

2016-09-22 Thread Adrian Salceanu
Escher is pretty cool, but it’s more about data interactions and
visualizations (dashboards?), rather than building full-featured web apps
and products.

I’m working on Genie: https://github.com/essenciary/Genie.jl a full stack
MVC web framework for Julia, in the spirit of Rails or Django.

It now runs smoothly on both 0.4 and 0.5 - it’s still WIP, but it’s got
pretty much all the necessary features already (ORM, templating system,
authentication, authorization, caching, migrations, model validators, etc).
Also, it now takes full advantage of Julia’s parallel programming
capabilities, using multiple cores to handle requests, which is pretty
cool.

If you check here: http://genieframework.com/packages (built with Genie
btw) you’ll find a few more options - like Merly and Bukdu.

Other than that you have the low(er) level components: Mux, HttpServer,
WebSockets.

On September 22, 2016 at 17:44:11, Alexey Cherkaev (
alexey.cherk...@gmail.com) wrote:

Hi all,

What is the current state of web programming with Julia?

http://juliawebstack.org/ seems quite out of date (still suggesting to use
Morsel, which is marked as deprecated).

Escher looks quite nice (https://github.com/shashi/Escher.jl), but still
fails to build on both 0.4 and 0.5 versions (but it does seem to be
actively developed).

Thanks!
Alexey


Re: [julia-users] Re: ANN: A potential new Discourse-based Julia forum

2016-09-22 Thread mmh
http://julia.malmaud.com 


Now links to some random dude's website :P

On Monday, September 19, 2016 at 3:39:34 PM UTC-4, Jonathan Malmaud wrote:
>
> Discourse lives! 
> On Mon, Sep 19, 2016 at 3:01 PM Stefan Karpinski  > wrote:
>
>> I got the go ahead from Jeff and Viral to give this a try, then it didn't 
>> end up panning out. It would still be worth a try, imo.
>>
>> On Sat, Sep 17, 2016 at 11:55 AM, mmh  
>> wrote:
>>
>>> Hi Jonathan,
>>>
>>> Seems like this has kind of burnt out. Is there still an impetus for a 
>>> transition? 
>>>
>>> On Saturday, September 19, 2015 at 8:16:36 PM UTC-4, Jonathan Malmaud 
>>> wrote:
>>>
 Hi all,
 There's been some chatter about maybe switching to a new, more modern 
 forum platform for Julia that could potentially subsume julia-users, 
 julia-dev, julia-stats, julia-gpu, and julia-jobs.   I created 
 http://julia.malmaud.com for us to try one out and see if we like it. 
 Please check it out and leave feedback. All the old posts from julia-users 
 have already been imported to it.

 It is using Discourse , the same forum 
 software used for the forums of Rust , 
 BoingBoing, and some other big sites. Benefits over Google Groups include 
 better support for topic tagging, community moderation features,  Markdown 
 (and hence syntax highlighting) in messages, inline previews of linked-to 
 Github issues, better mobile support, and more options for controlling 
 when 
 and what you get emailed. The Discourse website 
  does a better job of summarizing the 
 advantages than I could.

 To get things started, Mike Innes suggested having a topic on what we 
 plan on working on this coming week. 
 I think that's a great idea.

 Just to be clear, this isn't "official" in any sense - it's just to 
 kickstart the discussion. 

 -Jon



>>

Re: [julia-users] Implementing compilers or interpreters in Julia?

2016-09-22 Thread Isaiah Norton
Any answer is going to be fairly opinion-based. That said, here are some
opinions :)

some advantages:
- interactive development
- performance
- Cxx.jl, if you want to target LLVM
- multiple dispatch

some disadvantages:
- lack of ML-style pattern-matching -- though multiple dispatch on node
type covers much of how this is used (and
https://github.com/kmsquire/Match.jl might help too)
- for an interpreter, lack of static compilation (your language would need
to carry along Julia and LLVM machinery)

Some examples of compiler-related-things written in Julia:

- Intel's ParallelAccelerator compiler:
https://github.com/IntelLabs/ParallelAccelerator.jl
- Julia's type inference:
https://github.com/JuliaLang/julia/blob/master/base/inference.jl
- Julia parser: https://github.com/JuliaLang/JuliaParser.jl

- "green-fairy", an abstract interpreter:
https://github.com/carnaval/green-fairy
   (MS thesis project, see PDF report. still WIP)
- parsers for object-file formats (DWARF, COFF, MachO, ...):
https://github.com/Keno?tab=repositories=source
- https://github.com/JuliaGPU/CUDAnative.jl
- https://github.com/swadey/LispSyntax.jl
- https://github.com/andrewcooke/ParserCombinator.jl
- https://github.com/abeschneider/PEGParser.jl
- BF interpreter: https://github.com/johnmyleswhite/Brainfuck.jl
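
As a rough illustration of the pattern-matching point above (not from the thread; the node types are hypothetical, Julia 0.5 syntax), dispatching one method per AST node type covers much of what an ML-style match expression would do:

```julia
abstract Node

immutable Lit <: Node
    val::Int
end

immutable Add <: Node
    lhs::Node
    rhs::Node
end

# One evaluate method per node type stands in for a match expression:
evaluate(n::Lit) = n.val
evaluate(n::Add) = evaluate(n.lhs) + evaluate(n.rhs)

evaluate(Add(Lit(1), Add(Lit(2), Lit(3))))  # 6
```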



On Wed, Sep 21, 2016 at 8:29 PM, Sebastian the Eight <
imperator.ozymand...@gmail.com> wrote:

> I'm curious how well-suited Julia is to writing compilers/interpreters. It's
> fairly easy to find example toy compilers for most other languages (C,
> OCaml, Ruby, etc.), but Julia seems to be lacking any examples except for its
> own. I was wondering if there are advantages or disadvantages to using
> Julia in this way, other than its being a relatively new language


[julia-users] Re: julia-i18n: Translators and reviewer needed!

2016-09-22 Thread Ismael Venegas Castelló
Dear Patrick,

Basically Transifex has us covered there: it has a nifty feature called 
Transifex Live which automatically extracts and updates the string database 
in the correct way. All the translators have to do is fiddle around with 
the web interface to get used to it, and start translating and reviewing! :D

Transifex live demo:


   - https://www.transifex.com/live_demo
   

Transifex live documentation:


   - http://docs.transifex.com/live
   

You can see here the relevant issue and pull request for adding this into 
the Julia web page, they include lots of technical details about how this 
all works:


   - https://github.com/JuliaLang/julialang.github.com/issues/187
   - https://github.com/JuliaLang/julialang.github.com/pull/252
   
As always if you have any doubt, please don't hesitate to ask and I will 
try to answer all your questions.

Cheers,
Ismael Venegas Castelló

On Thursday, 22 September 2016 at 4:55:41 (UTC-5), Patrick Kofod 
Mogensen wrote:
>
> How does this sync with the "original website"? I mean, what if something 
> changes on the original website?
>
> On Thursday, September 22, 2016 at 8:35:38 AM UTC+2, Ismael Venegas 
> Castelló wrote:
>>
>> I forgot, you guys can see the staging domain here:
>>
>>
>>- http://julialanges.github.io
>>
>> Please let me know what you think!
>>
>> On Thursday, 22 September 2016 at 1:32:13 (UTC-5), Ismael Venegas 
>> Castelló wrote:
>>>
>>> Looking how to contribute to Julia? Check out the web translation 
>>> project on Transifex. 
>>> Help us bring Julia internationalization to your native language one 
>>> string at a time!
>>>
>>>
>>>- https://www.transifex.com/julialang-i18n/julialang-web
>>>- https://gitter.im/JuliaLangEs/julia-i18n
>>>- https://github.com/JuliaLang/julialang.github.com/pull/252
>>>
>>>

Re: [julia-users] Re: Adding publications easier

2016-09-22 Thread Stefan Karpinski
I didn't realize that there was a README that actually tells you to do all
of that. I've opened an issue
 since
there's no reason someone should have to install Jekyll in order to submit
a publication that uses Julia – that's way over the top.

On Thu, Sep 22, 2016 at 11:48 AM, Stefan Karpinski 
wrote:

> You don't need Jekyll installed to edit markdown files. You can even edit
> files directly in the web and preview the rendering. Admittedly, installing
> Jekyll is a pain and seems to have gotten worse over time somehow, but you
> don't need to do any of that to submit a publication.
>
> On Thu, Sep 22, 2016 at 8:08 AM, Magnus Röding 
> wrote:
>
>> I have now made a new attempt, ending up having everything installed and
>> seemingly working on an Ubuntu 16.04 system. My workflow so far is:
>>
>> - git clone https:// to local repo (I have a github account)
>> - edit julia.bib and _EDIT_ME_index.md according to instructions
>> - run 'make' in publications directory
>> - run 'bundle exec jekyll build' in main repo directory
>> - adding the modified files by 'git add *'
>>
>> So, off-topic question as far as Julia goes, but what to do now? I
>> realize I'm supposed to commit, annotate, request-pull, and go nuts, but in
>> which order?
>>
>> Thanks in advance, not my cup of tea this, so if anyone can help tx :-)
>>
>> On Tuesday, 20 September 2016 at 21:14:21 UTC+2, Tony Kelman wrote:
>>
>>> What do you propose? Github is about as simple as we can do, considering
>>> also the complexity of maintaining something from the project side. There
>>> are plenty of people around the community who are happy to walk you through
>>> the process of making a pull request, and if it's not explained in enough
>>> detail then we can add more instructions if it would help. What have you
>>> tried so far?
>>
>>
>


Re: [julia-users] Re: Adding publications easier

2016-09-22 Thread Stefan Karpinski
You don't need Jekyll installed to edit markdown files. You can even edit
files directly in the web and preview the rendering. Admittedly, installing
Jekyll is a pain and seems to have gotten worse over time somehow, but you
don't need to do any of that to submit a publication.

On Thu, Sep 22, 2016 at 8:08 AM, Magnus Röding 
wrote:

> I have now made a new attempt, ending up having everything installed and
> seemingly working on an Ubuntu 16.04 system. My workflow so far is:
>
> - git clone https:// to local repo (I have a github account)
> - edit julia.bib and _EDIT_ME_index.md according to instructions
> - run 'make' in publications directory
> - run 'bundle exec jekyll build' in main repo directory
> - adding the modified files by 'git add *'
>
> So, off-topic question as far as Julia goes, but what to do now? I realize
> I'm supposed to commit, annotate, request-pull, and go nuts, but in which
> order?
>
> Thanks in advance, not my cup of tea this, so if anyone can help tx :-)
>
> On Tuesday, 20 September 2016 at 21:14:21 UTC+2, Tony Kelman wrote:
>
>> What do you propose? Github is about as simple as we can do, considering
>> also the complexity of maintaining something from the project side. There
>> are plenty of people around the community who are happy to walk you through
>> the process of making a pull request, and if it's not explained in enough
>> detail then we can add more instructions if it would help. What have you
>> tried so far?
>
>


[julia-users] Julia's current state of web-programming

2016-09-22 Thread Alexey Cherkaev
Hi all,

What is the current state of web programming with Julia?

http://juliawebstack.org/ seems quite out of date (still suggesting to use 
Morsel, which is marked as deprecated).

Escher looks quite nice (https://github.com/shashi/Escher.jl), but still 
fails to build on both 0.4 and 0.5 versions (but it does seem to be 
actively developed).

Thanks!
Alexey



[julia-users] Implementing compilers or interpreters in Julia?

2016-09-22 Thread Sebastian the Eight
I'm curious how well-suited Julia is to writing compilers/interpreters. It's 
fairly easy to find example toy compilers for most other languages (C, OCaml, 
Ruby, etc.), but Julia seems to be lacking any examples except for its own. I 
was wondering if there are advantages or disadvantages to using Julia in this 
way, other than its being a relatively new language

Re: [julia-users] Re: Visualizing a Julia AST (abstract syntax tree) as a tree

2016-09-22 Thread David P. Sanders


On Thursday, 22 September 2016 at 10:29:11 (UTC-4), Tom Breloff wrote:
>
> Hi David this is very cool and useful!  I'd be happy to have it in 
> PlotRecipes if you don't find a better home for it.  If there's a generic 
> function to produce an adjacency list then we could use my graph recipes.  
> When I have the time (i.e. soonish when I really want nicer graphs) I plan 
> on incorporating the new GraphLayout into the recipes.  Let me know.
>

Hi Tom,

That sounds like a pretty good idea, though I have to say I haven't grokked 
PlotRecipes yet.

Since LightGraphs.jl doesn't support labelled graphs, maybe 

https://github.com/JuliaGraphs/Networks.jl

is the right solution? Do you have any suggestions about this?


 

>
> Tom
>
> On Thu, Sep 22, 2016 at 10:12 AM, David P. Sanders  > wrote:
>
>>
>>
>> 
>>
>> Here's an example of the output.
>>
>>
>> On Wednesday, 21 September 2016 at 17:24:52 (UTC-4), David P. 
>> Sanders wrote:
>>>
>>> Hi,
>>>
>>> In case it's useful for anybody, the following notebook shows how to use 
>>> the LightGraphs and TikzGraphs packages
>>> to visualize a Julia abstract syntax tree (Expression object) as an 
>>> actual tree:
>>>
>>> https://gist.github.com/dpsanders/5cc1acff2471d27bc583916e00d43387
>>>
>>> Currently it requires the master branch of TikzGraphs.jl.
>>>
>>> It would be great to have some kind of Julia snippet repository for this 
>>> kind of thing that is less than a package but
>>> provides some kind of useful functionality. 
>>>
>>> David.
>>>
>>>
>>>
>

Re: [julia-users] Re: Visualizing a Julia AST (abstract syntax tree) as a tree

2016-09-22 Thread Tom Breloff
Hi David this is very cool and useful!  I'd be happy to have it in
PlotRecipes if you don't find a better home for it.  If there's a generic
function to produce an adjacency list then we could use my graph recipes.
When I have the time (i.e. soonish when I really want nicer graphs) I plan
on incorporating the new GraphLayout into the recipes.  Let me know.

Tom

On Thu, Sep 22, 2016 at 10:12 AM, David P. Sanders 
wrote:

>
>
> 
>
> Here's an example of the output.
>
>
> On Wednesday, 21 September 2016 at 17:24:52 (UTC-4), David P. Sanders
> wrote:
>>
>> Hi,
>>
>> In case it's useful for anybody, the following notebook shows how to use
>> the LightGraphs and TikzGraphs packages
>> to visualize a Julia abstract syntax tree (Expression object) as an
>> actual tree:
>>
>> https://gist.github.com/dpsanders/5cc1acff2471d27bc583916e00d43387
>>
>> Currently it requires the master branch of TikzGraphs.jl.
>>
>> It would be great to have some kind of Julia snippet repository for this
>> kind of thing that is less than a package but
>> provides some kind of useful functionality.
>>
>> David.
>>
>>
>>


[julia-users] Re: Visualizing a Julia AST (abstract syntax tree) as a tree

2016-09-22 Thread David P. Sanders





Here's an example of the output.


On Wednesday, 21 September 2016 at 17:24:52 (UTC-4), David P. Sanders 
wrote:
>
> Hi,
>
> In case it's useful for anybody, the following notebook shows how to use 
> the LightGraphs and TikzGraphs packages
> to visualize a Julia abstract syntax tree (Expression object) as an actual 
> tree:
>
> https://gist.github.com/dpsanders/5cc1acff2471d27bc583916e00d43387
>
> Currently it requires the master branch of TikzGraphs.jl.
>
> It would be great to have some kind of Julia snippet repository for this 
> kind of thing that is less than a package but
> provides some kind of useful functionality. 
>
> David.
>
>
>

[julia-users] Re: How to deal with methods redefinition warnings in 0.5?

2016-09-22 Thread Cedric St-Jean
I haven't implemented it yet in ClobberingReload.jl, but I suspect that 

include("file.jl"); IJulia.clear_output()

can provide some pain relief (at the cost of missing the meaningful 
warnings) if you're using IJulia.


On Tuesday, September 20, 2016 at 8:59:05 AM UTC-4, Matthieu wrote:
>
> +1. I also think they are useless and annoying during interactive sessions.
>
> On Monday, September 19, 2016 at 5:46:10 PM UTC-4, K leo wrote:
>>
>> To me, these methods redefinition warnings are pretty annoying - there 
>> are zillions of them.  Why should they be there?  Look, when I do an include 
>> again, I know I am overriding those methods in my file.  
>>
>> On Monday, September 12, 2016 at 9:50:27 PM UTC+8, K leo wrote:
>>>
>>> After calling workspace(), there are even a lot of warnings regarding 
>>> methods in packages.
>>>
>>>

[julia-users] Re: Maps in Julia?

2016-09-22 Thread 'Philippe Roy' via julia-users
Yeesian, I took the time to read the entire presentation. It does look very 
promising!  Thanks for such work, I'll certainly keep an eye on it.

On Wednesday, 21 September 2016 at 08:37:27 UTC-4, Yeesian Ng wrote:
>
> It is based on slow work-in-progress, but I have slides 
>  (and notebook 
> )
>  
> for a talk I'm giving later today on geospatial processing in julia. They 
> give some sense of what the API of plotting a map through 
> https://github.com/tbreloff/Plots.jl via 
> https://github.com/yeesian/GeoDataFrames.jl might look like in the future.
>
> On Tuesday, 20 September 2016 20:54:10 UTC-4, cdm wrote:
>>
>>
>> the julia-geo list may also be helpful:
>>
>> https://groups.google.com/forum/#!forum/julia-geo
>>
>

Re: [julia-users] Re: Debugging stack allocation

2016-09-22 Thread Yichao Yu
On Thu, Sep 22, 2016 at 8:44 AM, Jamie Brandon
 wrote:
> Well, I managed to answer the first part of my question. A variable won't be
> stack-allocated unless the compiler can prove it is always defined before
> being used. Mine were, but the control flow was too complex for the compiler
> to deal with. I added `local finger = Finger{1}(0,0)` in a few places to
> make life easier for the compiler and my allocations went away.
>
> I'm still interested in the second half of the question though - is there a
> better way to debug allocation problems other than guesswork?

For most cases code_warntype should be able to tell you that. The
use-before-assign case is one of the few that's not captured by it.
(Other cases are mostly due to the compiler heuristics specializing on
types/functions.) Ideally, the use-before-assign issue should be fixed
so the variable doesn't have to be boxed.
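
For what it's worth, a sketch of that workflow (the function `f` is illustrative, not from the thread):

```julia
# 1. Inspect inferred types: variables or return types reported as Any
#    (highlighted in red at the REPL) usually indicate boxing:
f(xs) = sum(t^2 for t in xs)
@code_warntype f(rand(10))

# 2. For line-by-line allocation counts, start Julia with
#        julia --track-allocation=user script.jl
#    and inspect the *.mem files written alongside each source file.
```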

>
> On 22 September 2016 at 14:06, Jamie Brandon 
> wrote:
>>
>> Oh, for comparison, this simpler function contains no heap allocation at
>> all, so the compiler is definitely willing to do this under some
>> circumstances.
>>
>> function foo()
>>   finger = Finger{1}(1,1)
>>   for _ in 1:1000
>> finger = Finger{1}(finger.hi + finger.lo, finger.lo)
>>   end
>>   finger
>> end
>>
>> On 22 September 2016 at 14:01, Jamie Brandon
>>  wrote:
>>>
>>> I have a query compiler which emits Julia code. The code contains lots of
>>> calls to generated functions, which include sections like this:
>>>
>>> hi = gallop(column, column[finger.lo], finger.lo, finger.hi, <=)
>>> Finger{$(C+1)}(finger.lo, hi)
>>>
>>> Finger is an immutable, isbits type:
>>>
>>>   immutable Finger{C}
>>> lo::Int64
>>> hi::Int64
>>>   end
>>>
>>> When I run the generated code I see many millions of allocations. Using
>>> code_warntype I can see that all the generated functions have been inlined,
>>> every variable has a concrete inferred type and there are no generic calls.
>>> And yet I see many sections in the llvm code like this:
>>>
>>>   %235 = call %jl_value_t* @jl_gc_pool_alloc(i8* %ptls_i8, i32 1456, i32
>>> 32)
>>>   %236 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 -1,
>>> i32 0
>>>   store %jl_value_t* inttoptr (i64 139661604385936 to %jl_value_t*),
>>> %jl_value_t** %236, align 8
>>>   %237 = bitcast %jl_value_t* %235 to i64*
>>>   store i64 %229, i64* %237, align 8
>>>   %238 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 1
>>>   %239 = bitcast %jl_value_t* %238 to i64*
>>>   store i64 %234, i64* %239, align 8
>>>   store %jl_value_t* %235, %jl_value_t** %finger_2_2, align 8
>>>   %.pr = load %jl_value_t*, %jl_value_t** %finger_1_2, align 8
>>>
>>> The pointer on the third line is:
>>>
>>>   unsafe_pointer_to_objref(convert(Ptr{Any}, 139661604385936))
>>>   # => Data.Finger{1}
>>>
>>> So it appears that these fingers are still being heap-allocated.
>>>
>>> What could cause this? And more generally, how does one debug issues like
>>> this? Is there any way to introspect on the decision?
>>
>>
>


[julia-users] Re: Debugging stack allocation

2016-09-22 Thread Jamie Brandon
Well, I managed to answer the first part of my question. A variable won't
be stack-allocated unless the compiler can prove it is always defined
before being used. Mine were, but the control flow was too complex for the
compiler to deal with. I added `local finger = Finger{1}(0,0)` in a few
places to make life easier for the compiler and my allocations went away.

I'm still interested in the second half of the question though - is there a
better way to debug allocation problems other than guesswork?
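
For anyone hitting the same thing, a minimal sketch of the workaround described above (`Finger` as defined earlier in the thread; the control flow here is hypothetical):

```julia
immutable Finger{C}
    lo::Int64
    hi::Int64
end

function pick(cond::Bool)
    # Initializing up front lets the compiler prove the variable is
    # always defined before use, so this isbits value can stay on the
    # stack instead of being boxed:
    local finger = Finger{1}(0, 0)
    if cond
        finger = Finger{1}(1, 2)
    end
    finger
end
```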

On 22 September 2016 at 14:06, Jamie Brandon 
wrote:

> Oh, for comparison, this simpler function contains no heap allocation at
> all, so the compiler is definitely willing to do this under some
> circumstances.
>
> function foo()
>   finger = Finger{1}(1,1)
>   for _ in 1:1000
> finger = Finger{1}(finger.hi + finger.lo, finger.lo)
>   end
>   finger
> end
>
> On 22 September 2016 at 14:01, Jamie Brandon  > wrote:
>
>> I have a query compiler which emits Julia code. The code contains lots of
>> calls to generated functions, which include sections like this:
>>
>> hi = gallop(column, column[finger.lo], finger.lo, finger.hi, <=)
>> Finger{$(C+1)}(finger.lo, hi)
>>
>> Finger is an immutable, isbits type:
>>
>>   immutable Finger{C}
>> lo::Int64
>> hi::Int64
>>   end
>>
>> When I run the generated code I see many millions of allocations. Using
>> code_warntype I can see that all the generated functions have been inlined,
>> every variable has a concrete inferred type and there are no generic calls.
>> And yet I see many sections in the llvm code like this:
>>
>>   %235 = call %jl_value_t* @jl_gc_pool_alloc(i8* %ptls_i8, i32 1456, i32
>> 32)
>>   %236 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 -1,
>> i32 0
>>   store %jl_value_t* inttoptr (i64 139661604385936 to %jl_value_t*),
>> %jl_value_t** %236, align 8
>>   %237 = bitcast %jl_value_t* %235 to i64*
>>   store i64 %229, i64* %237, align 8
>>   %238 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 1
>>   %239 = bitcast %jl_value_t* %238 to i64*
>>   store i64 %234, i64* %239, align 8
>>   store %jl_value_t* %235, %jl_value_t** %finger_2_2, align 8
>>   %.pr = load %jl_value_t*, %jl_value_t** %finger_1_2, align 8
>>
>> The pointer on the third line is:
>>
>>   unsafe_pointer_to_objref(convert(Ptr{Any}, 139661604385936))
>>   # => Data.Finger{1}
>>
>> So it appears that these fingers are still being heap-allocated.
>>
>> What could cause this? And more generally, how does one debug issues like
>> this? Is there any way to introspect on the decision?
>>
>
>


[julia-users] Re: Adding publications easier

2016-09-22 Thread Magnus Röding
I have now made a new attempt, ending up having everything installed and 
seemingly working on an Ubuntu 16.04 system. My workflow so far is:

- git clone https:// to local repo (I have a github account)
- edit julia.bib and _EDIT_ME_index.md according to instructions
- run 'make' in publications directory
- run 'bundle exec jekyll build' in main repo directory
- adding the modified files by 'git add *'

So, off-topic question as far as Julia goes, but what to do now? I realize 
I'm supposed to commit, annotate, request-pull, and go nuts, but in which 
order?

Thanks in advance, not my cup of tea this, so if anyone can help tx :-)

On Tuesday, 20 September 2016 at 21:14:21 UTC+2, Tony Kelman wrote:

> What do you propose? Github is about as simple as we can do, considering 
> also the complexity of maintaining something from the project side. There 
> are plenty of people around the community who are happy to walk you through 
> the process of making a pull request, and if it's not explained in enough 
> detail then we can add more instructions if it would help. What have you 
> tried so far?



[julia-users] Re: Debugging stack allocation

2016-09-22 Thread Jamie Brandon
Oh, for comparison, this simpler function contains no heap allocation at
all, so the compiler is definitely willing to do this under some
circumstances.

function foo()
  finger = Finger{1}(1,1)
  for _ in 1:1000
finger = Finger{1}(finger.hi + finger.lo, finger.lo)
  end
  finger
end

On 22 September 2016 at 14:01, Jamie Brandon 
wrote:

> I have a query compiler which emits Julia code. The code contains lots of
> calls to generated functions, which include sections like this:
>
> hi = gallop(column, column[finger.lo], finger.lo, finger.hi, <=)
> Finger{$(C+1)}(finger.lo, hi)
>
> Finger is an immutable, isbits type:
>
>   immutable Finger{C}
> lo::Int64
> hi::Int64
>   end
>
> When I run the generated code I see many millions of allocations. Using
> code_warntype I can see that all the generated functions have been inlined,
> every variable has a concrete inferred type and there are no generic calls.
> And yet I see many sections in the llvm code like this:
>
>   %235 = call %jl_value_t* @jl_gc_pool_alloc(i8* %ptls_i8, i32 1456, i32
> 32)
>   %236 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 -1,
> i32 0
>   store %jl_value_t* inttoptr (i64 139661604385936 to %jl_value_t*),
> %jl_value_t** %236, align 8
>   %237 = bitcast %jl_value_t* %235 to i64*
>   store i64 %229, i64* %237, align 8
>   %238 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 1
>   %239 = bitcast %jl_value_t* %238 to i64*
>   store i64 %234, i64* %239, align 8
>   store %jl_value_t* %235, %jl_value_t** %finger_2_2, align 8
>   %.pr = load %jl_value_t*, %jl_value_t** %finger_1_2, align 8
>
> The pointer on the third line is:
>
>   unsafe_pointer_to_objref(convert(Ptr{Any}, 139661604385936))
>   # => Data.Finger{1}
>
> So it appears that these fingers are still being heap-allocated.
>
> What could cause this? And more generally, how does one debug issues like
> this? Is there any way to introspect on the decision?
>


[julia-users] Debugging stack allocation

2016-09-22 Thread Jamie Brandon
I have a query compiler which emits Julia code. The code contains lots of
calls to generated functions, which include sections like this:

hi = gallop(column, column[finger.lo], finger.lo, finger.hi, <=)
Finger{$(C+1)}(finger.lo, hi)

Finger is an immutable, isbits type:

  immutable Finger{C}
lo::Int64
hi::Int64
  end

When I run the generated code I see many millions of allocations. Using
code_warntype I can see that all the generated functions have been inlined,
every variable has a concrete inferred type and there are no generic calls.
And yet I see many sections in the llvm code like this:

  %235 = call %jl_value_t* @jl_gc_pool_alloc(i8* %ptls_i8, i32 1456, i32 32)
  %236 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 -1, i32
0
  store %jl_value_t* inttoptr (i64 139661604385936 to %jl_value_t*),
%jl_value_t** %236, align 8
  %237 = bitcast %jl_value_t* %235 to i64*
  store i64 %229, i64* %237, align 8
  %238 = getelementptr inbounds %jl_value_t, %jl_value_t* %235, i64 1
  %239 = bitcast %jl_value_t* %238 to i64*
  store i64 %234, i64* %239, align 8
  store %jl_value_t* %235, %jl_value_t** %finger_2_2, align 8
  %.pr = load %jl_value_t*, %jl_value_t** %finger_1_2, align 8

The pointer on the third line is:

  unsafe_pointer_to_objref(convert(Ptr{Any}, 139661604385936))
  # => Data.Finger{1}

So it appears that these fingers are still being heap-allocated.

What could cause this? And more generally, how does one debug issues like
this? Is there any way to introspect on the decision?


Re: [julia-users] how to get rid of axis in Plots ?

2016-09-22 Thread Tom Breloff
This isn't configurable right now, but I've wanted the same thing. Can you
open an issue and I'll add it when I have a couple of minutes? I think the
plotlyjs backend does this by default, though.

On Thursday, September 22, 2016, Roger Herikstad 
wrote:

> Hey,
>  Similar question; how do I remove only the upper and right border? My
> personal preference is to only show the left and bottom border. In Winston,
> I'd do this
>
> p = Winston.plot([1,2,3],[3,4,5])
> Winston.setattr(p.frame, "draw_axis", false)
>
> Thanks!
>
>
> On Wednesday, June 29, 2016 at 2:15:47 AM UTC+8, Henri Girard wrote:
>>
>> I just found in the same time ... Thank you :)
>>
>> On Tuesday, 28 June 2016 at 17:48:34 UTC+2, Henri Girard wrote:
>>>
>>> I don't see any command to get rid of axis in Plots or hide them
>>> I tried axis=false axes =false idem with none but nothing works...
>>>
>>>


[julia-users] Re: julia-i18n: Translators and reviewer needed!

2016-09-22 Thread Patrick Kofod Mogensen
How does this sync with the "original website"? I mean, what if something 
changes on the original website?

On Thursday, September 22, 2016 at 8:35:38 AM UTC+2, Ismael Venegas 
Castelló wrote:
>
> I forgot, you guys can see the staging domain here:
>
>
>- http://julialanges.github.io
>
> Please let me know what you think!
>
> On Thursday, 22 September 2016 at 1:32:13 (UTC-5), Ismael Venegas 
> Castelló wrote:
>>
>> Looking how to contribute to Julia? Check out the web translation 
>> project on Transifex. 
>> Help us bring Julia internationalization to your native language one 
>> string at a time!
>>
>>
>>- https://www.transifex.com/julialang-i18n/julialang-web
>>- https://gitter.im/JuliaLangEs/julia-i18n
>>- https://github.com/JuliaLang/julialang.github.com/pull/252
>>
>>

[julia-users] Julia v0.5.0 always killed on a memory limited cluster

2016-09-22 Thread Alan Crawford
I am using Julia v0.5.0 on a memory-limited SGE cluster. In particular, to 
submit jobs on the cluster, both h_vmem and tmem resource flags need to be 
passed in the qsub command. However, all of my Julia jobs keep being killed 
because the workers seem to be very hungry for virtual memory and ask for it 
outside of the SGE scheduler. 

A more detailed description of the problem can be found in the last post in 
this thread 
. 
It seems that the best workaround (listed on this thread 
) is to change  #define 
REGION_PG_COUNT 16*8*4096 to #define REGION_PG_COUNT 8*4096 in 
src/gc-pages.c when compiling Julia. 

However, I am just a user of the cluster and don't have permission to 
compile Julia on it. Nor do I have access to machinery to compile my own 
binary. As such, I am wondering if there could be a more user-friendly 
option?

Two ideas come to mind:

1) Offer low-virtual-memory Julia binaries; or
2) Add a flag when launching Julia that lets you set K, where 
REGION_PG_COUNT is defined as K*8*4096 ... or some other equivalent...

In particular, option 2 would really be an addition to the language and 
would greatly aid users - especially those without the capacity to compile 
Julia. If, for some reason, this is undesirable, perhaps option 1 would 
present the best short-term fix?



[julia-users] Re: how to get rid of axis in Plots ?

2016-09-22 Thread Roger Herikstad
Hey,
 Similar question; how do I remove only the upper and right border? My 
personal preference is to only show the left and bottom border. In Winston, 
I'd do this

p = Winston.plot([1,2,3],[3,4,5])
Winston.setattr(p.frame, "draw_axis", false)
 
Thanks!


On Wednesday, June 29, 2016 at 2:15:47 AM UTC+8, Henri Girard wrote:
>
> I just found in the same time ... Thank you :)
>
> On Tuesday, 28 June 2016 at 17:48:34 UTC+2, Henri Girard wrote:
>>
>> I don't see any command to get rid of axis in Plots or hide them
>> I tried axis=false axes =false idem with none but nothing works...
>>
>>

[julia-users] Re: Plotting lots of data

2016-09-22 Thread CrocoDuck O'Ducks
Hi!

Thank you for your package. Yes, I managed to get my plots working with GR. 
For some reason it is now plotting all the lines; I don't get the weird 
white bands anymore.

On Wednesday, 21 September 2016 12:52:43 UTC+1, Igor wrote:
>
> Hello!
> did you manage to plot big data sets? You can try to use my small package 
> for this (  https://github.com/ig-or/qwtwplot.jl ) - it's very 
> interesting for me how it can handle big data sets.
>
> Best regards, Igor
>
>
> On Thursday, 16 June 2016 at 0:08:42 UTC+3, CrocoDuck O'Ducks 
> wrote:
>>
>>
>> 
>>
>>
>> 
>> Hi, thank you very much, really appreciated. GR seems pretty much what I 
>> need. I like I can use Plots.jl with it. PlotlyJS.jl is very hot, I guess I 
>> will use it when I need interactivity. I will look into OpenGL related 
>> visualization tools for more advanced plots/renders.
>>
>> I just have a quick question. I did a quick test with GR, plotting 
>> two 1-second sine waves sampled at 192 kHz, one with frequency 100 Hz 
>> and one with frequency 10 kHz. The 100 Hz plot looks fine, but the 10 kHz plot has 
>> blank areas (see attached pictures). I guess it is due to the density of 
>> lines... probably solved by making the plot bigger?
>>
>>
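A backend-independent way to avoid overdense or banded line plots is to shrink the data before plotting. A sketch in plain Julia (`decimate_minmax` is my own hypothetical helper, not part of any package): keep each bin's minimum and maximum, which preserves the waveform's visual envelope:

```julia
# Reduce a long signal to per-bin (min, max) pairs so the number of plotted
# points tracks the pixel width of the figure instead of the sample count.
function decimate_minmax(y::AbstractVector, nbins::Integer)
    n = length(y)
    lo = similar(y, nbins)
    hi = similar(y, nbins)
    for b in 1:nbins
        i1 = div((b - 1) * n, nbins) + 1   # first sample of bin b
        i2 = div(b * n, nbins)             # last sample of bin b
        seg = view(y, i1:i2)
        lo[b] = minimum(seg)
        hi[b] = maximum(seg)
    end
    return lo, hi
end

fs = 192_000                              # sample rate from the example above
y = sin.(2pi * 10_000 .* (0:fs-1) ./ fs)  # 1 s of the 10 kHz tone
lo, hi = decimate_minmax(y, 2_000)        # ~2000 bins is plenty for a screen
# plotting lo and hi (e.g. as a ribbon) now draws the envelope cheaply
```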

[julia-users] Re: Maps in Julia?

2016-09-22 Thread Michael Borregaard

Yeesian, this looks really promising! It will be great to follow its 
progress.


Re: [julia-users] Re: Plotting lots of data

2016-09-22 Thread Igor _
Chris, thank you for the information!!

On 21 September 2016 at 23:07, "Chris Rackauckas" 
wrote:

> Usually I'm plotting the results of really long differential equation
> solutions. The one I am mentioning is from a really long stochastic
> differential equation solution (publication coming soon): 19 lines with
> likely millions of dots, thrown together into one figure or spanning
> multiple. I can't really explain "faster" other than, when I ran the plot
> command afterwards (on smaller test cases) PyPlot would take forever but GR
> would get the plot done much quicker, so for the longer run I went with GR
> and it worked. I am not much of a plot guy so my method is, use Plots.jl,
> switch backends to find something that works, and if I can't find an easy
> solution like this, go ask Tom :). What I am saying is, if you do some
> experiments, GR will plot faster than something like Gadfly, PyPlot,
> (Plotly gave issues, this was in June so it may no longer be present) etc.,
> so my hint is to give the GR backend a try if you're ever in a similar case.
>
> On Wednesday, September 21, 2016 at 11:54:11 AM UTC-7, Andreas Lobinger
> wrote:
>>
>> Hello colleague,
>>
>> On Wednesday, September 21, 2016 at 8:34:21 PM UTC+2, Chris Rackauckas
>> wrote:
>>>
>>> I've managed to plot quite a few large datasets. GR through Plots.jl
>>> works very well for this. I tend to still prefer the defaults of PyPlot,
>>> but GR is just so much faster that I switch the backend whenever the amount
>>> of data gets unruly (larger than like 5-10GB, and it's worked to save a
>>> raster image from data larger than 40-50 GB). Plots + GR is a good combo
>>>
>>
>> Could you explain this at more length, especially the 'faster'? It sounds
>> like you're plotting a few hundred million items/lines.
>>
>
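The backend-hopping workflow described above is a one-line switch in Plots.jl; a minimal sketch (assumes the GR and PyPlot backends are installed, and timing differences will vary by machine):

```julia
# Same plotting code, different backend: call gr() or pyplot() before plot().
using Plots
x = range(0, stop = 2pi, length = 1_000_000)
gr()                      # fast backend, good for very large data sets
p = plot(x, sin.(x))
savefig(p, "big.png")
# pyplot()                # switch back for small or final figures if preferred
```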


[julia-users] Re: julia-i18n: Translators and reviewer needed!

2016-09-22 Thread Ismael Venegas Castelló
I forgot, you guys can see the staging domain here:


   - http://julialanges.github.io
   
Please let me know what you think!

On Thursday, 22 September 2016 at 1:32:13 (UTC-5), Ismael Venegas 
Castelló wrote:
>
> Looking for a way to contribute to Julia? Check out the web translation 
> project on Transifex. 
> Help us bring Julia internationalization to your native language one 
> string at a time!
>
>
>- https://www.transifex.com/julialang-i18n/julialang-web
>- https://gitter.im/JuliaLangEs/julia-i18n
>- https://github.com/JuliaLang/julialang.github.com/pull/252
>
>