Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 10:53 PM, Cedric St-Jean  wrote:
> That's actually a compiler bug, nice!
>
> abstract Feature
>
> type A <: Feature end
> evaluate(f::A) = 1.0
>
> foo(features::Vector{Feature}) = isa(features[1], A) ?
> evaluate(features[1]::A) : evaluate(features[1])
>
> @show foo(Feature[A()])
>
> type C <: Feature end
> evaluate(f::C) = 100
>
> @show foo(Feature[C()])
>
> yields
>
> foo(Feature[A()]) = 1.0
>
> foo(Feature[C()]) = 4.94e-322
>
>
> That explains why performance was the same on your computer: the compiler
> was making an incorrect assumption about the return type of `evaluate`. Or
> maybe it's an intentional gamble by the Julia devs, for the sake of
> performance.
>
> I couldn't find any issue describing this. Yichao?

This is effectively #265. It's not always predictable what assumption
the compiler makes now...
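A defensive variant (a sketch, not from the thread; whether it actually
helps depends on how #265 gets resolved) is to assert the return type at
the call site, so at least the expectation is explicit:

foo(features::Vector{Feature}) =
    (isa(features[1], A) ? evaluate(features[1]::A) :
                           evaluate(features[1]))::Float64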

>
>
> On Saturday, April 2, 2016 at 10:16:59 PM UTC-4, Greg Plowman wrote:
>>
>> Thanks Cedric and Yichao.
>>
>> This makes sense that there might be new subtypes and associated
>> specialised methods. I understand that now. Thanks.
>>
>> On my machine (v0.4.5 Windows), fast() and pretty_fast() seem to run in
>> similar time.
>> So I looked at @code_warntype as Yichao suggested and got the following.
>> I don't fully know how to interpret the output, but the return type from the
>> final "catchall" evaluate() seems to be inferred/asserted as Float64 (see the
>> highlighted line below).
>>
>> Would this explain why pretty_fast() seems to be as efficient as fast()?
>>
>> Why is the return type being inferred/asserted as Float64?
>>
>>
>> julia> @code_warntype fast(features)
>> Variables:
>>   features::Array{Feature,1}
>>   retval::Float64
>>   #s1::Int64
>>   i::Int64
>>
>> Body:
>>   begin  # none, line 2:
>>   retval = 0.0 # none, line 3:
>>   GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
>>   GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1,
>> :(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
>> (Int64,(Base.sub_int)(1,1)))::Int64)))
>>   #s1 = (top(getfield))(GenSym(0),:start)::Int64
>>   unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 ===
>> (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
>> goto 1
>>   2:
>>   GenSym(3) = #s1::Int64
>>   GenSym(4) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
>>   i = GenSym(3)
>>   #s1 = GenSym(4) # none, line 4:
>>   unless
>> (Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
>> goto 4 # none, line 5:
>>   retval =
>> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
>>   goto 5
>>   4:  # none, line 7:
>>   retval =
>> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
>>   5:
>>   3:
>>   unless
>> (Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
>> === (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
>> ,:stop)::Int64,1))::Bool goto 2
>>   1:
>>   0:  # none, line 10:
>>   return retval::Float64
>>   end::Float64
>>
>>
>> julia> @code_warntype pretty_fast(features)
>> Variables:
>>   features::Array{Feature,1}
>>   retval::Float64
>>   #s1::Int64
>>   i::Int64
>>
>> Body:
>>   begin  # none, line 2:
>>   retval = 0.0 # none, line 3:
>>   GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
>>   GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1,
>> :(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
>> (Int64,(Base.sub_int)(1,1)))::Int64)))
>>   #s1 = (top(getfield))(GenSym(0),:start)::Int64
>>   unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 ===
>> (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
>> goto 1
>>   2:
>>   GenSym(4) = #s1::Int64
>>   GenSym(5) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
>>   i = GenSym(4)
>>   #s1 = GenSym(5) # none, line 4:
>>   unless
>> (Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
>> goto 4 # none, line 5:
>>   retval =
>> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
>>   goto 6
>>   4:  # none, line 6:
>>   unless
>> (Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.B)::Bool
>> goto 5 # none, line 7:
>>   retval =
>> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
>>   goto 6
>>   5:  # none, line 9:
>>   GenSym(3) =
>> (Main.evaluate)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature)::Float64
>>   retval =
>> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,GenSym(3)))
>>   6:
>>   3:
>>   unless
>> (Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
>> === (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
>> ,:stop)::Int6

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Cedric St-Jean
That's actually a compiler bug, nice!

abstract Feature

type A <: Feature end
evaluate(f::A) = 1.0

foo(features::Vector{Feature}) = isa(features[1], A) ? 
evaluate(features[1]::A) : evaluate(features[1])

@show foo(Feature[A()])

type C <: Feature end
evaluate(f::C) = 100

@show foo(Feature[C()])

yields

foo(Feature[A()]) = 1.0

foo(Feature[C()]) = 4.94e-322


That explains why performance was the same on your computer: the compiler
was making an incorrect assumption about the return type of `evaluate`
(4.94e-322 is exactly the bit pattern of the integer 100 reinterpreted as a
Float64). Or maybe it's an intentional gamble by the Julia devs, for the
sake of performance.

I couldn't find any issue describing this. Yichao?

On Saturday, April 2, 2016 at 10:16:59 PM UTC-4, Greg Plowman wrote:
>
> Thanks Cedric and Yichao.
>
> This makes sense that there might be new subtypes and associated 
> specialised methods. I understand that now. Thanks.
>
> On my machine (v0.4.5 Windows), fast() and pretty_fast() seem to run in 
> similar time.
> So I looked at @code_warntype as Yichao suggested and got the following.
> I don't fully know how to interpret the output, but the return type from the
> final "catchall" evaluate() seems to be inferred/asserted as Float64 (see the
> highlighted line below).
>
> Would this explain why pretty_fast() seems to be as efficient as fast()?
>
> Why is the return type being inferred/asserted as Float64?
>
>
> julia> @code_warntype fast(features)
> Variables:
>   features::Array{Feature,1}
>   retval::Float64
>   #s1::Int64
>   i::Int64
>
> Body:
>   begin  # none, line 2:
>   retval = 0.0 # none, line 3:
>   GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
>   GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
> :(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
> (Int64,(Base.sub_int)(1,1)))::Int64)))
>   #s1 = (top(getfield))(GenSym(0),:start)::Int64
>   unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 === 
> (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
>  
> goto 1
>   2:
>   GenSym(3) = #s1::Int64
>   GenSym(4) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
>   i = GenSym(3)
>   #s1 = GenSym(4) # none, line 4:
>   unless 
> (Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
>  
> goto 4 # none, line 5:
>   retval = 
> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
>   goto 5
>   4:  # none, line 7:
>   retval = 
> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
>   5:
>   3:
>   unless 
> (Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
>  
> === (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
> ,:stop)::Int64,1))::Bool goto 2
>   1:
>   0:  # none, line 10:
>   return retval::Float64
>   end::Float64
>
>
> julia> @code_warntype pretty_fast(features)
> Variables:
>   features::Array{Feature,1}
>   retval::Float64
>   #s1::Int64
>   i::Int64
>
> Body:
>   begin  # none, line 2:
>   retval = 0.0 # none, line 3:
>   GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
>   GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
> :(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
> (Int64,(Base.sub_int)(1,1)))::Int64)))
>   #s1 = (top(getfield))(GenSym(0),:start)::Int64
>   unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 === 
> (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
>  
> goto 1
>   2:
>   GenSym(4) = #s1::Int64
>   GenSym(5) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
>   i = GenSym(4)
>   #s1 = GenSym(5) # none, line 4:
>   unless 
> (Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
>  
> goto 4 # none, line 5:
>   retval = 
> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
>   goto 6
>   4:  # none, line 6:
>   unless 
> (Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.B)::Bool
>  
> goto 5 # none, line 7:
>   retval = 
> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
>   goto 6
>   5:  # none, line 9:
>   GenSym(3) = 
> (Main.evaluate)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature)::Float64
>   retval = 
> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,GenSym(3)))
>   6:
>   3:
>   unless 
> (Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
>  
> === (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
> ,:stop)::Int64,1))::Bool goto 2
>   1:
>   0:  # none, line 12:
>   return retval::Float64
>   end::Float64
>
> julia>
>
>
>
> On Sunday, April 3, 2016 at 8:04:35 AM UTC+10, Cedric St-Jean wrote:
>
>>
>>
>> On Saturday, April 2, 2016 at 5:39:45 PM UTC-4, Greg Plowman wrote:

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread 'Greg Plowman' via julia-users
Thanks Cedric and Yichao.

This makes sense that there might be new subtypes and associated 
specialised methods. I understand that now. Thanks.

On my machine (v0.4.5 Windows), fast() and pretty_fast() seem to run in 
similar time.
So I looked at @code_warntype as Yichao suggested and got the following.
I don't fully know how to interpret the output, but the return type from the
final "catchall" evaluate() seems to be inferred/asserted as Float64 (see the
highlighted line below).

Would this explain why pretty_fast() seems to be as efficient as fast()?

Why is the return type being inferred/asserted as Float64?


julia> @code_warntype fast(features)
Variables:
  features::Array{Feature,1}
  retval::Float64
  #s1::Int64
  i::Int64

Body:
  begin  # none, line 2:
  retval = 0.0 # none, line 3:
  GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
  GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
:(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
(Int64,(Base.sub_int)(1,1)))::Int64)))
  #s1 = (top(getfield))(GenSym(0),:start)::Int64
  unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 === 
(Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
 
goto 1
  2:
  GenSym(3) = #s1::Int64
  GenSym(4) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
  i = GenSym(3)
  #s1 = GenSym(4) # none, line 4:
  unless 
(Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
 
goto 4 # none, line 5:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
  goto 5
  4:  # none, line 7:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
  5:
  3:
  unless 
(Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
 
=== (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
,:stop)::Int64,1))::Bool goto 2
  1:
  0:  # none, line 10:
  return retval::Float64
  end::Float64


julia> @code_warntype pretty_fast(features)
Variables:
  features::Array{Feature,1}
  retval::Float64
  #s1::Int64
  i::Int64

Body:
  begin  # none, line 2:
  retval = 0.0 # none, line 3:
  GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
  GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
:(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
(Int64,(Base.sub_int)(1,1)))::Int64)))
  #s1 = (top(getfield))(GenSym(0),:start)::Int64
  unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 === 
(Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
 
goto 1
  2:
  GenSym(4) = #s1::Int64
  GenSym(5) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
  i = GenSym(4)
  #s1 = GenSym(5) # none, line 4:
  unless 
(Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
 
goto 4 # none, line 5:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
  goto 6
  4:  # none, line 6:
  unless 
(Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.B)::Bool
 
goto 5 # none, line 7:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
  goto 6
  5:  # none, line 9:
  GenSym(3) = 
(Main.evaluate)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature)::Float64
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,GenSym(3)))
  6:
  3:
  unless 
(Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
 
=== (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
,:stop)::Int64,1))::Bool goto 2
  1:
  0:  # none, line 12:
  return retval::Float64
  end::Float64

julia>



On Sunday, April 3, 2016 at 8:04:35 AM UTC+10, Cedric St-Jean wrote:

>
>
> On Saturday, April 2, 2016 at 5:39:45 PM UTC-4, Greg Plowman wrote:
>>
>> Cedric,
>> On my machine fast() and pretty_fast() run in roughly the same time.
>> Are you sure you pre-compiled first?
>>
>
> Yep. I'm using @benchmark on Julia 0.4.5 on OSX, and the time difference 
> is definitely > 2X. 
>  
>
>>  
>> 1. If you add a default fallback method, say, evaluate(f::Feature) = -1.0
>> Theoretically, would that inform the compiler that evaluate() always 
>> returns a Float64 and therefore is type stable?
>>
>
> No, because I might write 
>
> type C <: Feature
> end
> evaluate(f::C) = 100
>
> and a Vector{Feature} might contain a C. Abstract types are open-ended,
> new types can be created at runtime, so the compiler can't assume anything
> about them, which is why they're not useful for performance. See here.
>
>  
>
>> Does dynamic dispatch mean the compiler has to look up the method at run time
>> (because it can't know the type ahead of time)?

[julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Scott Jones
Oops, somehow my reply got sent before I finished it!
About the division: I do think that Julia is not consistent with the / 
operator.  Other operators almost always return something of the same type. 
 While there is the Unicode character ÷ operator, which does integer 
division, that's a pain to type (5 keystrokes vs 1).
I do think it would be better if / used the same promotion rules as *.
When dividing by a constant where you want to ensure a floating point 
result, I don't think it's so difficult to add a . to the numeric literal, 
instead of causing the confusion and/or extra keystrokes that the current 
definition causes.
Also, I don't think there is anything about the ÷ character that screams 
out "integer division" vs. "floating point division".
(Is that a convention in any other language, that I'm just not aware of?)
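For readers following along, a quick REPL illustration of the operators
under discussion (standard Julia behavior):

julia> 7 / 2       # / always produces a floating-point result
3.5

julia> 7 ÷ 2       # ÷ (typed \div then TAB) does truncating integer division
3

julia> div(7, 2)   # the ASCII spelling of the same operation
3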

On Saturday, April 2, 2016 at 7:55:55 AM UTC-4, Spiritus Pap wrote:
>
> Hi there,
>
> TL;DR: A lot of people that could use julia (researchers currently using 
> python) won't. I give an example of how it makes my life hard, and I try to 
> suggest solutions.
>
> I've been introduced to julia about a month ago.
> I'm an algorithmic researcher; I write mostly research code: statistics,
> image processing, algorithms, etc.
> I use mostly python with numpy for my stuff, and C/C++ when I need
> performance.
> I was really happy when I heard of julia, because it takes the simplicity 
> of python and combines it with JIT compilation for speed!
>
> I REALLY tried to use julia, I really did. I tried to convince my friends 
> at work to use it too.
> However, there are a few things that make it unusable the way it is.
>
> Decisions that in my opinion were made by people who do not write 
> research-code:
> 1. Indexing in Julia being 1-based and inclusive, instead of 0-based and
> exclusive of the end (like in C/Python/lots of other stuff)
> 2. No simple integer-division operator.
>
> A simple example why it makes my *life hard*: Assume there is an array of 
> size 100, and i want to take the i_th portion of it out of n. This is a 
> common scenario for research-code, at least for me and my friends.
> In python:
> a[i*100/n:(i+1)*100/n]
> In julia:
> a[1+div(i*100,n):div((i+1)*100,n)]
>
> A lot more cumbersome in julia, and it is annoying and unattractive. This 
> is just a simple example.
>
> *Possible solutions:*
> The reason I'm writing this post is because I want to use julia, and I
> want it to become great.
> *About the division:* I would suggest *adding *an integer division 
> *operator*, such as *//*. Would help a lot. Yes, I think it should be by 
> default, so that newcomers would need the least amount of effort to use 
> julia comfortably.
>
> *About the indexing:* I realize that this is a decision made a long time 
> ago, and everything is built this way. Yes, it is like matlab, and no, it 
> is not a good thing.
> I am a mathematician, and I almost always index my sequences and expressions
> from 0; it usually just makes more sense.
> The problem is both in arrays (e.g. a[0]) and in slices (e.g. 0:10).
> An array could be solved perhaps by a *custom *0 based *array object*. 
> But the slice? Maybe adding a 0 based *slice operator*(such as .. or _)? 
> is it possible to do so in a library?
>
> I'd be happy to write these myself, but I believe these need to be in the 
> standard library. Again, so that newcomers would need the least amount of 
> effort to use julia comfortably.
> If you have better suggestions, I'd be happy to hear.
>
> Thank you for your time
>


[julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Scott Jones
I feel your pain; those two items have been a pain for me and my
colleagues. However, I think Julia is such a flexible language that I have
high hopes that both 0-based vs. 1-based indexing and the difference between
row-major vs. column-major layout can be handled
in a generic fashion, and as Tony has pointed out, there has been a lot of
discussion about just how to handle those issues in a clean AND performant
fashion.

About division, although yes, there is the \div

On Saturday, April 2, 2016 at 7:55:55 AM UTC-4, Spiritus Pap wrote:
>
> Hi there,
>
> TL;DR: A lot of people that could use julia (researchers currently using 
> python) won't. I give an example of how it makes my life hard, and I try to 
> suggest solutions.
>
> I've been introduced to julia about a month ago.
> I'm an algorithmic researcher; I write mostly research code: statistics,
> image processing, algorithms, etc.
> I use mostly python with numpy for my stuff, and C/C++ when I need
> performance.
> I was really happy when I heard of julia, because it takes the simplicity 
> of python and combines it with JIT compilation for speed!
>
> I REALLY tried to use julia, I really did. I tried to convince my friends 
> at work to use it too.
> However, there are a few things that make it unusable the way it is.
>
> Decisions that in my opinion were made by people who do not write 
> research-code:
> 1. Indexing in Julia being 1-based and inclusive, instead of 0-based and
> exclusive of the end (like in C/Python/lots of other stuff)
> 2. No simple integer-division operator.
>
> A simple example why it makes my *life hard*: Assume there is an array of 
> size 100, and i want to take the i_th portion of it out of n. This is a 
> common scenario for research-code, at least for me and my friends.
> In python:
> a[i*100/n:(i+1)*100/n]
> In julia:
> a[1+div(i*100,n):div((i+1)*100,n)]
>
> A lot more cumbersome in julia, and it is annoying and unattractive. This 
> is just a simple example.
>
> *Possible solutions:*
> The reason I'm writing this post is because I want to use julia, and I
> want it to become great.
> *About the division:* I would suggest *adding *an integer division 
> *operator*, such as *//*. Would help a lot. Yes, I think it should be by 
> default, so that newcomers would need the least amount of effort to use 
> julia comfortably.
>
> *About the indexing:* I realize that this is a decision made a long time 
> ago, and everything is built this way. Yes, it is like matlab, and no, it 
> is not a good thing.
> I am a mathematician, and I almost always index my sequences and expressions
> from 0; it usually just makes more sense.
> The problem is both in arrays (e.g. a[0]) and in slices (e.g. 0:10).
> An array could be solved perhaps by a *custom *0 based *array object*. 
> But the slice? Maybe adding a 0 based *slice operator*(such as .. or _)? 
> is it possible to do so in a library?
>
> I'd be happy to write these myself, but I believe these need to be in the 
> standard library. Again, so that newcomers would need the least amount of 
> effort to use julia comfortably.
> If you have better suggestions, I'd be happy to hear.
>
> Thank you for your time
>


Re: [julia-users] Fast coroutines

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 7:23 PM, Glen H  wrote:
> Hi,
>
> Thanks everyone for the helpful comments.  If this isn't the way to go then
> what do you recommend for a good tradeoff between fast and easy-to-write
> iteration?  I have tried iterating using the iteration protocol
> (start(), next(), done()) and it is very fast, but I was looking for something
> that was a bit easier to write.  Can you point me to some more info on
> "generators"?  Right now I'm just learning the language and trying to
> understand the various options and what the performance impact is.  It would
> be helpful if the documents (or some other site) gave some idea of the
> performance of various options.
>
> Carl
>

The generator work is ongoing on the master branch and is not
available on the release branch.

See 
http://docs.julialang.org/en/latest/manual/arrays/?highlight=generator#generator-expressions
for the docs.
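For example, the Task-based sum from this thread can be written as a
generator expression on master (a 0.5-only sketch):

gen_sum(vec) = sum(v for v in vec)   # lazy generator, no task switching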

>
>
> On Saturday, April 2, 2016 at 8:48:18 AM UTC-4, Yichao Yu wrote:
>>
>> On Sat, Apr 2, 2016 at 8:33 AM, Erik Schnetter  wrote:
>> > Can you put a number on the task creating and task switching overhead?
>>
>> The most obvious expensive parts are the `jl_setjmp` and `jl_longjmp`.
>> I'm not really sure how long those take.
>> There is also other overhead, like finding another task to run.
>>
>> For this particular example, the task version is doing
>>
>> 1. Box a value
>> 2. Switch task
>> 3. Return the boxed value
>> 4. Since it is not type stable (I don't think type inference can
>> handle tasks), a dynamic dispatch and another boxing for the result
>> 5. Switch task again.
>>
>> Every single one of the above is way more expensive than the integer addition.
>>
>> @Carl,
>>
>> If you want to do this in this style, I believe the right abstraction
>> is a generator (i.e. an iterator). Support was added on master and
>> there's work on making it faster.
>>
>> >
>> > For example, if everything runs on a single core, task switching could
>> > (theoretically) happen within 100 ns on a modern CPU. Whether that is
>> > the case depends on the tradeoffs and choices made during design and
>> > implementation, hence the question.
>> >
>> > -erik
>> >
>> > On Sat, Apr 2, 2016 at 8:28 AM, Yichao Yu  wrote:
>> >> On Fri, Apr 1, 2016 at 3:45 PM, Carl  wrote:
>> >>> Hello,
>> >>>
>> >>> Julia is great.  I'm new fan and trying to figure out how to write
>> >>> simple
>> >>> coroutines that are fast.  So far I have this:
>> >>>
>> >>>
>> >>> function task_iter(vec)
>> >>> @task begin
>> >>> i = 1
>> >>> for v in vec
>> >>> produce(v)
>> >>> end
>> >>> end
>> >>> end
>> >>>
>> >>> function task_sum(vec)
>> >>> s = 0.0
>> >>> for val in task_iter(vec)
>> >>> s += val
>> >>> end
>> >>> return s
>> >>> end
>> >>>
>> >>> function normal_sum(vec)
>> >>> s = 0.0
>> >>> for val in vec
>> >>> s += val
>> >>> end
>> >>> return s
>> >>> end
>> >>>
>> >>> values = rand(10^6)
>> >>> task_sum(values)
>> >>> normal_sum(values)
>> >>>
>> >>> @time task_sum(values)
>> >>> @time normal_sum(values)
>> >>>
>> >>>
>> >>>
>> >>>   1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time)
>> >>>   0.006656 seconds (5 allocations: 176 bytes)
>> >>>
>> >>> I was hoping to be able to get the speeds to match (as close as
>> >>> possible).
>> >>> I've read the performance tips and I can't find anything I'm doing
>> >>> wrong.  I
>> >>> also tried out 0.5 thinking that maybe it would be faster with
>> >>> supporting
>> >>> fast anonymous functions but it was slower (1.5 seconds).
>> >>
>> >> Tasks are expensive and are basically designed for IO.
>> >> ~1000x slow down for this simple stuff is expected.
>> >>
>> >>>
>> >>>
>> >>> Carl
>> >>>
>> >
>> >
>> >
>> > --
>> > Erik Schnetter 
>> > http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Fast coroutines

2016-04-02 Thread Glen H
Hi,

Thanks everyone for the helpful comments.  If this isn't the way to go, then
what do you recommend for a good tradeoff between fast and easy-to-write
iteration?  I have tried iterating using the iteration protocol
(start(), next(), done()) and it is very fast, but I was looking for something
that was a bit easier to write.  Can you point me to some more info on
"generators"?  Right now I'm just learning the language and trying to
understand the various options and what their performance impact is.  It
would be helpful if the documentation (or some other site) gave some idea of
the performance of various options.

Carl
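For reference, a minimal sketch of the start()/next()/done() protocol
mentioned above, with a made-up Countdown type (0.4 syntax):

immutable Countdown
    n::Int
end
Base.start(c::Countdown) = c.n                       # initial state
Base.next(c::Countdown, state) = (state, state - 1)  # (item, next state)
Base.done(c::Countdown, state) = state == 0          # stop when exhausted

# collect(Countdown(3)) == [3, 2, 1]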



On Saturday, April 2, 2016 at 8:48:18 AM UTC-4, Yichao Yu wrote:
>
> On Sat, Apr 2, 2016 at 8:33 AM, Erik Schnetter  > wrote: 
> > Can you put a number on the task creating and task switching overhead? 
>
> The most obvious expensive parts are the `jl_setjmp` and `jl_longjmp`.
> I'm not really sure how long those take.
> There is also other overhead, like finding another task to run.
>
> For this particular example, the task version is doing 
>
> 1. Box a value 
> 2. Switch task 
> 3. Return the boxed value 
> 4. Since it is not type stable (I don't think type inference can 
> handle tasks), a dynamic dispatch and another boxing for the result 
> 5. Switch task again. 
>
> Every single one of the above is way more expensive than the integer addition.
>
> @Carl, 
>
> If you want to do this in this style, I believe the right abstraction 
> is a generator (i.e. an iterator). Support was added on master and
> there's work on making it faster.
>
> > 
> > For example, if everything runs on a single core, task switching could 
> > (theoretically) happen within 100 ns on a modern CPU. Whether that is 
> > the case depends on the tradeoffs and choices made during design and 
> > implementation, hence the question. 
> > 
> > -erik 
> > 
> > On Sat, Apr 2, 2016 at 8:28 AM, Yichao Yu  > wrote: 
> >> On Fri, Apr 1, 2016 at 3:45 PM, Carl > 
> wrote: 
> >>> Hello, 
> >>> 
> >>> Julia is great.  I'm new fan and trying to figure out how to write 
> simple 
> >>> coroutines that are fast.  So far I have this: 
> >>> 
> >>> 
> >>> function task_iter(vec) 
> >>> @task begin 
> >>> i = 1 
> >>> for v in vec 
> >>> produce(v) 
> >>> end 
> >>> end 
> >>> end 
> >>> 
> >>> function task_sum(vec) 
> >>> s = 0.0 
> >>> for val in task_iter(vec) 
> >>> s += val 
> >>> end 
> >>> return s 
> >>> end 
> >>> 
> >>> function normal_sum(vec) 
> >>> s = 0.0 
> >>> for val in vec 
> >>> s += val 
> >>> end 
> >>> return s 
> >>> end 
> >>> 
> >>> values = rand(10^6) 
> >>> task_sum(values) 
> >>> normal_sum(values) 
> >>> 
> >>> @time task_sum(values) 
> >>> @time normal_sum(values) 
> >>> 
> >>> 
> >>> 
> >>>   1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time) 
> >>>   0.006656 seconds (5 allocations: 176 bytes) 
> >>> 
> >>> I was hoping to be able to get the speeds to match (as close as 
> possible). 
> >>> I've read the performance tips and I can't find anything I'm doing 
> wrong.  I 
> >>> also tried out 0.5 thinking that maybe it would be faster with 
> supporting 
> >>> fast anonymous functions but it was slower (1.5 seconds). 
> >> 
> >> Tasks are expensive and are basically designed for IO. 
> >> ~1000x slow down for this simple stuff is expected. 
> >> 
> >>> 
> >>> 
> >>> Carl 
> >>> 
> > 
> > 
> > 
> > -- 
> > Erik Schnetter > 
> > http://www.perimeterinstitute.ca/personal/eschnetter/ 
>


Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Cedric St-Jean


On Saturday, April 2, 2016 at 5:39:45 PM UTC-4, Greg Plowman wrote:
>
> Cedric,
> On my machine fast() and pretty_fast() run in roughly the same time.
> Are you sure you pre-compiled first?
>

Yep. I'm using @benchmark on Julia 0.4.5 on OSX, and the time difference is 
definitely > 2X. 
 

>  
> 1. If you add a default fallback method, say, evaluate(f::Feature) = -1.0
> Theoretically, would that inform the compiler that evaluate() always 
> returns a Float64 and therefore is type stable?
>

No, because I might write 

type C <: Feature
end
evaluate(f::C) = 100

and a Vector{Feature} might contain a C. Abstract types are open-ended; new
types can be created at runtime, so the compiler can't assume anything about
them, which is why they're not useful for performance. See here.

 

> Does dynamic dispatch mean the compiler has to look up the method at run time
> (because it can't know the type ahead of time)?
> Is this equivalent to explicitly coding "static dispatch" with if
> statements?
> If so, then why is it so much slower?
> Providing a fallback method as in 1. or explicitly returning Float64
> doesn't seem to improve speed, which naively suggests that the slowness is
> from dynamic dispatch, not from type instability???
>
> 3. Also, on my machine pretty_fast() is as fast as fast(). Why is this so, 
> if pretty_fast() is supposedly type unstable?
>
> As you can probably tell, I'm pretty confused on this.
>
>
> On Sunday, April 3, 2016 at 6:34:09 AM UTC+10, Cedric St-Jean wrote:
>
>> Thank you for the detailed explanation. I tried it out:
>>
>> function pretty_fast(features::Vector{Feature})
>> retval = 0.0
>> for i in 1 : length(features)
>> if isa(features[i], A)
>> x = evaluate(features[i]::A)
>> elseif isa(features[i], B)
>> x = evaluate(features[i]::B)
>> else
>> x = evaluate(features[i])
>> end
>> retval += x
>> end
>> retval
>> end
>>
>> On my laptop, fast runs in 10 microseconds, pretty_fast in 30, and slow 
>> in 210.
>>
>> On Saturday, April 2, 2016 at 12:24:18 PM UTC-4, Yichao Yu wrote:
>>>
>>> On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler  
>>> wrote: 
>>> > Thank you for the comments. In my original code it means the 
>>> difference 
>>> > between a 30 min execution with memory allocation in the Gigabytes and 
>>> a few 
>>> > seconds of execution with only 800 bytes using the second version. 
>>> > I thought under-the-hood Julia basically runs those if statements 
>>> anyway for 
>>> > its dispatch, and don't know why it needs to allocate any memory. 
>>> > Having the if-statement workaround will be fine though. 
>>>
>>> Well, if you have a lot of these cheap functions being dynamically 
>>> dispatched I think it is not a good way to use the type. Depending on 
>>> your problem, you may be better off using a enum/flags/dict to 
>>> represent the type/get the values. 
>>>
>>> The reason for the allocation is that the return type is unknown. It 
>>> should be obvious to see if you check your code with code_warntype. 
>>>
>>> > 
>>> > On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote: 
>>> >> 
>>> >> 
>>> >>> Therefore there's no way the compiler can rewrite the slow version 
>>> to the 
>>> >>> fast version. 
>>> >> 
>>> >> 
>>> >> It knows that the element type is a Feature, so it could produce: 
>>> >> 
>>> >> if isa(features[i], A) 
>>> >> retval += evaluate(features[i]::A) 
>>> >> elseif isa(features[i], B) 
>>> >> retval += evaluate(features[i]::B) 
>>> >> else 
>>> >> retval += evaluate(features[i]) 
>>> >> end 
>>> >> 
>>> >> and it would make sense for abstract types that have few subtypes. I 
>>> >> didn't realize that dispatch was an order of magnitude slower than 
>>> type 
>>> >> checking. It's easy enough to write a macro generating this 
>>> expansion, too. 
>>> >> 
>>> >> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote: 
>>> >>> 
>>> >>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  
>>> >>> wrote: 
>>> >>> > Hello Julia Users. 
>>> >>> > 
>>> >>> > I ran into a weird slowdown issue and reproduced a minimal working 
>>> >>> > example. 
>>> >>> > Maybe someone can help shed some light. 
>>> >>> > 
>>> >>> > abstract Feature 
>>> >>> > 
>>> >>> > type A <: Feature end 
>>> >>> > evaluate(f::A) = 1.0 
>>> >>> > 
>>> >>> > type B <: Feature end 
>>> >>> > evaluate(f::B) = 0.0 
>>> >>> > 
>>> >>> > function slow(features::Vector{Feature}) 
>>> >>> > retval = 0.0 
>>> >>> > for i in 1 : length(features) 
>>> >>> > retval += evaluate(features[i]) 
>>> >>> > end 
>>> >>> > retval 
>>> >>> > end 
>>> >>> > 
>>> >>> > function fast(features::Vector{Feature}) 
>>> >>> > retval = 0.0 
>>> >>> > for i in 1 : length(features) 
>>> >>> > if isa(features[i], A) 
>>> >>> > retval += evaluate(features[i]::A) 
>>> >>> >  

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 5:39 PM, 'Greg Plowman' via julia-users
 wrote:
> Cedric,
> On my machine fast() and pretty_fast() run in roughly the same time.
> Are you sure you pre-compiled first?
>
> Yichao,
>>
>> The compiler has no idea what the return type of the third one is, so this
>> version is still type unstable and you get dynamic dispatch at every
>> iteration for the floating-point add
>
>
> 1. If you add a default fallback method, say, evaluate(f::Feature) = -1.0
> Theoretically, would that inform the compiler that evaluate() always returns
> a Float64 and therefore is type stable?

No.
See also https://github.com/JuliaLang/julia/issues/1090

>
> 2. I don't get the connection between "dynamic dispatch" and "type
> stability".
> Does dynamic dispatch mean the compiler has to look up the method at run time
> (because it can't know the type ahead of time)?

Yes.

> Is this equivalent to explicitly coding "static dispatch" with if
> statements?
> If so, then why is it so much slower?

Since there may be other types that it doesn't know about yet. Although we
may be able to solve https://github.com/JuliaLang/julia/issues/265 in
a way that allows the compiler to exploit this.

> Providing a fallback method as in 1. or explicitly returning Float64 doesn't
> seem to improve speed, which naively suggests that the slowness is from
> dynamic dispatch, not from type instability???
>
> 3. Also, on my machine pretty_fast() is as fast as fast(). Why is this so,
> if pretty_fast() is supposedly type unstable?
>
> As you can probably tell, I'm pretty confused on this.
>

code_warntype
code_llvm
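For example (REPL sketch):

julia> @code_warntype pretty_fast(features)
julia> @code_llvm pretty_fast(features)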

>
> On Sunday, April 3, 2016 at 6:34:09 AM UTC+10, Cedric St-Jean wrote:
>>
>> Thank you for the detailed explanation. I tried it out:
>>
>> function pretty_fast(features::Vector{Feature})
>> retval = 0.0
>> for i in 1 : length(features)
>> if isa(features[i], A)
>> x = evaluate(features[i]::A)
>> elseif isa(features[i], B)
>> x = evaluate(features[i]::B)
>> else
>> x = evaluate(features[i])
>> end
>> retval += x
>> end
>> retval
>> end
>>
>> On my laptop, fast runs in 10 microseconds, pretty_fast in 30, and slow in
>> 210.
>>
>> On Saturday, April 2, 2016 at 12:24:18 PM UTC-4, Yichao Yu wrote:
>>>
>>> On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler 
>>> wrote:
>>> > Thank you for the comments. In my original code it means the difference
>>> > between a 30 min execution with memory allocation in the Gigabytes and
>>> > a few
>>> > seconds of execution with only 800 bytes using the second version.
>>> > I thought under-the-hood Julia basically runs those if statements
>>> > anyway for
>>> > its dispatch, and don't know why it needs to allocate any memory.
>>> > Having the if-statement workaround will be fine though.
>>>
>>> Well, if you have a lot of these cheap functions being dynamically
>>> dispatched I think it is not a good way to use the type. Depending on
>>> your problem, you may be better off using a enum/flags/dict to
>>> represent the type/get the values.
>>>
>>> The reason for the allocation is that the return type is unknown. It
>>> should be obvious to see if you check your code with code_warntype.
>>>
>>> >
>>> > On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote:
>>> >>
>>> >>
>>> >>> Therefore there's no way the compiler can rewrite the slow version to
>>> >>> the
>>> >>> fast version.
>>> >>
>>> >>
>>> >> It knows that the element type is a Feature, so it could produce:
>>> >>
>>> >> if isa(features[i], A)
>>> >> retval += evaluate(features[i]::A)
>>> >> elseif isa(features[i], B)
>>> >> retval += evaluate(features[i]::B)
>>> >> else
>>> >> retval += evaluate(features[i])
>>> >> end
>>> >>
>>> >> and it would make sense for abstract types that have few subtypes. I
>>> >> didn't realize that dispatch was an order of magnitude slower than
>>> >> type
>>> >> checking. It's easy enough to write a macro generating this expansion,
>>> >> too.
>>> >>
>>> >> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote:
>>> >>>
>>> >>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler 
>>> >>> wrote:
>>> >>> > Hello Julia Users.
>>> >>> >
>>> >>> > I ran into a weird slowdown issue and reproduced a minimal working
>>> >>> > example.
>>> >>> > Maybe someone can help shed some light.
>>> >>> >
>>> >>> > abstract Feature
>>> >>> >
>>> >>> > type A <: Feature end
>>> >>> > evaluate(f::A) = 1.0
>>> >>> >
>>> >>> > type B <: Feature end
>>> >>> > evaluate(f::B) = 0.0
>>> >>> >
>>> >>> > function slow(features::Vector{Feature})
>>> >>> > retval = 0.0
>>> >>> > for i in 1 : length(features)
>>> >>> > retval += evaluate(features[i])
>>> >>> > end
>>> >>> > retval
>>> >>> > end
>>> >>> >
>>> >>> > function fast(features::Vector{Feature})
>>> >>> > retval = 0.0
>>> >>> > for i in 1 : length(features)
>>> >>> > if isa(features[i], A)
>>> >>> > retval += evaluate(features[i]::A)
>>> >>> > 

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread 'Greg Plowman' via julia-users
Cedric,
On my machine fast() and pretty_fast() run in roughly the same time.
Are you sure you pre-compiled first?

Yichao,

> The compiler has no idea what the return type of the third one is, so this
> version is still type unstable and you get dynamic dispatch at every
> iteration for the floating-point add

 
1. If you add a default fallback method, say, evaluate(f::Feature) = -1.0
Theoretically, would that inform the compiler that evaluate() always 
returns a Float64 and therefore is type stable?

2. I don't get the connection between "dynamic dispatch" and "type 
stability".
Does dynamic dispatch mean the compiler has to look up the method at run time
(because it can't know the type ahead of time)?
Is this equivalent to explicitly coding "static dispatch" with if
statements?
If so, then why is it so much slower?
Providing a fallback method as in 1. or explicitly returning Float64
doesn't seem to improve speed, which naively suggests that the slowness is
from dynamic dispatch, not from type instability???

3. Also, on my machine pretty_fast() is as fast as fast(). Why is this so, 
if pretty_fast() is supposedly type unstable?

As you can probably tell, I'm pretty confused on this.


On Sunday, April 3, 2016 at 6:34:09 AM UTC+10, Cedric St-Jean wrote:

> Thank you for the detailed explanation. I tried it out:
>
> function pretty_fast(features::Vector{Feature})
> retval = 0.0
> for i in 1 : length(features)
> if isa(features[i], A)
> x = evaluate(features[i]::A)
> elseif isa(features[i], B)
> x = evaluate(features[i]::B)
> else
> x = evaluate(features[i])
> end
> retval += x
> end
> retval
> end
>
> On my laptop, fast runs in 10 microseconds, pretty_fast in 30, and slow in 
> 210.
>
> On Saturday, April 2, 2016 at 12:24:18 PM UTC-4, Yichao Yu wrote:
>>
>> On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler  
>> wrote: 
>> > Thank you for the comments. In my original code it means the difference 
>> > between a 30 min execution with memory allocation in the Gigabytes and 
>> a few 
>> > seconds of execution with only 800 bytes using the second version. 
>> > I thought under-the-hood Julia basically runs those if statements 
>> anyway for 
>> > its dispatch, and don't know why it needs to allocate any memory. 
>> > Having the if-statement workaround will be fine though. 
>>
>> Well, if you have a lot of these cheap functions being dynamically 
>> dispatched I think it is not a good way to use the type. Depending on 
>> your problem, you may be better off using a enum/flags/dict to 
>> represent the type/get the values. 
>>
>> The reason for the allocation is that the return type is unknown. It 
>> should be obvious to see if you check your code with code_warntype. 
>>
>> > 
>> > On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote: 
>> >> 
>> >> 
>> >>> Therefore there's no way the compiler can rewrite the slow version to 
>> the 
>> >>> fast version. 
>> >> 
>> >> 
>> >> It knows that the element type is a Feature, so it could produce: 
>> >> 
>> >> if isa(features[i], A) 
>> >> retval += evaluate(features[i]::A) 
>> >> elseif isa(features[i], B) 
>> >> retval += evaluate(features[i]::B) 
>> >> else 
>> >> retval += evaluate(features[i]) 
>> >> end 
>> >> 
>> >> and it would make sense for abstract types that have few subtypes. I 
>> >> didn't realize that dispatch was an order of magnitude slower than 
>> type 
>> >> checking. It's easy enough to write a macro generating this expansion, 
>> too. 
>> >> 
>> >> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote: 
>> >>> 
>> >>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  
>> >>> wrote: 
>> >>> > Hello Julia Users. 
>> >>> > 
>> >>> > I ran into a weird slowdown issue and reproduced a minimal working 
>> >>> > example. 
>> >>> > Maybe someone can help shed some light. 
>> >>> > 
>> >>> > abstract Feature 
>> >>> > 
>> >>> > type A <: Feature end 
>> >>> > evaluate(f::A) = 1.0 
>> >>> > 
>> >>> > type B <: Feature end 
>> >>> > evaluate(f::B) = 0.0 
>> >>> > 
>> >>> > function slow(features::Vector{Feature}) 
>> >>> > retval = 0.0 
>> >>> > for i in 1 : length(features) 
>> >>> > retval += evaluate(features[i]) 
>> >>> > end 
>> >>> > retval 
>> >>> > end 
>> >>> > 
>> >>> > function fast(features::Vector{Feature}) 
>> >>> > retval = 0.0 
>> >>> > for i in 1 : length(features) 
>> >>> > if isa(features[i], A) 
>> >>> > retval += evaluate(features[i]::A) 
>> >>> > else 
>> >>> > retval += evaluate(features[i]::B) 
>> >>> > end 
>> >>> > end 
>> >>> > retval 
>> >>> > end 
>> >>> > 
>> >>> > using ProfileView 
>> >>> > 
>> >>> > features = Feature[] 
>> >>> > for i in 1 : 1 
>> >>> > push!(features, A()) 
>> >>> > end 
>> >>> > 
>> >>> > slow(features) 
>> >>> > @time slow(features) 
>> >>> > fast(features) 
>> >>> > @time fast(features) 
>> >>> > 
>> >>> >

Re: [julia-users] string version of expressions not parseable

2016-04-02 Thread vishesh
Ah, thank you for that example.


On Saturday, April 2, 2016 at 11:02:07 AM UTC-7, Yichao Yu wrote:
>
> On Sat, Apr 2, 2016 at 1:02 PM,  > 
> wrote: 
> > I seem to be unable to get this to work 
> > 
> >>serialize(STDOUT, :(x + 1)) 
> > =b+1 
> > 
> >>deserialize(IOBuffer("=b+1")) 
> > :call 
> > 
> > which is only form 1 out of 4. 
>
> Serializing and deserializing do not use a plain-text format.
>
> julia> io = IOBuffer() 
> IOBuffer(data=UInt8[...], readable=true, writable=true, seekable=true, 
> append=false, size=0, maxsize=Inf, ptr=1, mark=-1) 
>
> julia> serialize(io, :(x + 1)) 
>
> julia> seekstart(io) 
> IOBuffer(data=UInt8[...], readable=true, writable=true, seekable=true, 
> append=false, size=9, maxsize=Inf, ptr=1, mark=-1) 
>
> julia> deserialize(io) 
> :(x + 1) 
>
>
> > 
> > Also, could you give me an example of a situation where the aliasing is 
> an 
> > issue? I'm unclear when that pops up. 
>


Re: [julia-users] Re: How to specify a template argument as a super type

2016-04-02 Thread Scott Lundberg
Thanks, that makes more sense! In my case I wanted `Name1{S} <: S`, but
neither `Name1 <: S` nor `S <: Name1` makes sense.
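A traits-style workaround for this kind of dispatch (only a sketch with
made-up names; Erik mentions traits below, and this is not his package's
API):

abstract Kind
immutable KindA <: Kind end
immutable KindB <: Kind end

kind(::Type) = KindB()        # fallback trait
kind(::Type{Int}) = KindA()   # opt a specific type in

f(x) = f(kind(typeof(x)), x)  # dispatch on the trait, not the hierarchy
f(::KindA, x) = "A-ish"
f(::KindB, x) = "B-ish"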


On Saturday, April 2, 2016 at 12:47:07 PM UTC-7, Erik Schnetter wrote:
>
> Julia's type hierarchies use single inheritance only. Thus inheritance 
> mechanisms that work fine in C++ can't always be made to work in 
> Julia. There are discussions on introducing concepts (traits), but 
> that's still in a prototype stage. (I'm using traits 
>  in a project of mine; this works 
> fine, only the error messages are sometimes confusing.) 
>
> If you have `Name1{S <: Name2} <: S`, and you need to stick with 
> single inheritance, then you essentially define both 
>
> `Name1{S} <: Name1`
> `Name1{S} <: S` 
>
> (here `Name1` is the abstract type that Julia introduces for every 
> parameterized type; `Name1` is different from `Name1{S}`). 
>
> Since Julia uses single inheritance, you also need to have either 
> `Name1 <: S` or `S <: Name1`. Both would be strange, since they would 
> introduce a relation between `Name1` and `S`, but you haven't 
> specified `S`. 
>
> -erik 
>
>
> On Sat, Apr 2, 2016 at 3:36 PM, Scott Lundberg  > wrote: 
> > Thanks Erik, though I am not quite clear how it would break things. 
> > 
> > As for the real goal: 
> > 
> > Name2 is also templated such as Name2{A} and Name2{B}. There are lots of 
> > methods that dispatch depending on Name2{A} or Name2{B}. 
> > 
> > I want to add a subtype of Name2 (which I call Name1) that still 
> correctly 
> > dispatches for those methods. Another way I would like to do it is: 
> > 
> > abstract Name1{Name2{T <: AvsB} <: Name2{T} 
> > 
> > but of course that also does not work. 
> > 
> > Thanks! 
> > 
> > On Saturday, April 2, 2016 at 11:55:47 AM UTC-7, Erik Schnetter wrote: 
> >> 
> >> If you define an abstract type `Name1{S}`, then Julia automatically 
> >> defines another abstract type `Name1` with `Name1{S} <: Name1`.
> >> 
> >> This would break if you tried to insert `S` into the type hierarchy as 
> >> well. 
> >> 
> >> -erik 
> >> 
> >> On Sat, Apr 2, 2016 at 2:51 PM, Tomas Lycken  
> wrote: 
> >> > If it's abstract, what is the purpose of the type parameter? In other 
> >> > words, 
> >> > what is the ultimate goal you're reaching for? 
> >> > 
> >> > //T 
> >> > 
> >> > On Saturday, April 2, 2016 at 3:17:34 AM UTC+2, Scott Lundberg wrote: 
> >> >> 
> >> >> You can define: 
> >> >> 
> >> >> abstract Name1{S <: Name2} <: Name3{S} 
> >> >> 
> >> >> but I want to define: 
> >> >> 
> >> >> abstract Name1{S <: Name2} <: S 
> >> >> 
> >> >> However the second line does not work. Is there some other way to 
> >> >> accomplish the same thing? Thanks! 
> >> 
> >> 
> >> 
> >> -- 
> >> Erik Schnetter  
> >> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>
>
>
> -- 
> Erik Schnetter > 
> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>


Re: [julia-users] Dict comprehension not type stable unless wrapped in a function

2016-04-02 Thread Cedric St-Jean
0.5 should have generators, so I assume that they will be the preferred way
to write a dict comprehension:

https://github.com/JuliaLang/julia/pull/14782
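With that PR's generator syntax (a 0.5-only sketch), the typed result comes
out of inference directly:

d = Dict(i => i/2 for i in 1:3)   # inferred as Dict{Int,Float64}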

On Saturday, April 2, 2016 at 8:59:01 AM UTC-4, Tamas Papp wrote:
>
> On Sat, Apr 02 2016, Yichao Yu wrote: 
>
> > On Sat, Apr 2, 2016 at 8:35 AM, Tamas Papp  > wrote: 
> >> On Sat, Apr 02 2016, Yichao Yu wrote: 
> >> 
> >>> On Sat, Apr 2, 2016 at 2:28 AM, Tamas Papp  > wrote: 
>  On Fri, Apr 01 2016, Yichao Yu wrote: 
>  
> > On Fri, Apr 1, 2016 at 5:33 AM, Tamas Papp  > wrote: 
> >> Hi, 
> >> 
> >> I ran into a problem with the result type of Dict comprehensions. 
> If I 
> >> wrap the comprehension in a function, it produces the narrowest 
> type, 
> >> otherwise the type of value is Any. Code looks like this: 
> > 
> > https://github.com/JuliaLang/julia/issues/7258 
>  
>  Sorry, I don't understand why it is the same issue. The type of 
>  arguments in the comprehension is always the same, just that one 
> version 
>  is wrapped in a function inside a function, and the other one isn't. 
> >>> 
> >>> And that's exactly where type inference sensitivity comes in. One of 
> >>> them is inferable and the other not. 
> >> 
> >> Thanks, now I think I get it. So for the time being, is wrapping 
> >> something in a function to make it inferable a reasonable general 
> >> workaround? 
> >> 
> > 
> > Depends on what you need. If it works, then sure.
> > Supplying the type manually also works, if you can easily determine it 
> > of course. 
>
> Rereading #7258 I convinced myself that I should supply the type 
> manually. Two questions: 
>
> 1. Is Dict{keytype,valuetype}([...my comprehension...]) the preferred 
> syntax, that will survive to 0.5? 
>
> 2. If I am computing the type of A/B and I know that their types are 
> TypeA and TypeB, what's the preferred way of computing the result type? 
> I came up with 
>
> typeof(one(TypeA)/one(TypeB)) 
>
> but maybe there is something more elegant. 
>
> Best, 
>
> Tamas 
>


Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Cedric St-Jean
I tried that, but it has no impact on performance, in either pretty_fast or 
slow. I don't understand why...

retval += x::Float64


On Saturday, April 2, 2016 at 4:42:01 PM UTC-4, Matt Bauman wrote:
>
> Perhaps a simpler solution would be to just assert that `evaluate` always 
> returns a Float64?  You may even be able to remove the isa branches in that 
> case, but I'm not sure how it'll compare.
>
> On Saturday, April 2, 2016 at 4:34:09 PM UTC-4, Cedric St-Jean wrote:
>>
>> Thank you for the detailed explanation. I tried it out:
>>
>> function pretty_fast(features::Vector{Feature})
>> retval = 0.0
>> for i in 1 : length(features)
>> if isa(features[i], A)
>> x = evaluate(features[i]::A)
>> elseif isa(features[i], B)
>> x = evaluate(features[i]::B)
>> else
>> x = evaluate(features[i])
>> end
>> retval += x
>> end
>> retval
>> end
>>
>> On my laptop, fast runs in 10 microseconds, pretty_fast in 30, and slow 
>> in 210.
>>
>> On Saturday, April 2, 2016 at 12:24:18 PM UTC-4, Yichao Yu wrote:
>>>
>>> On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler  
>>> wrote: 
>>> > Thank you for the comments. In my original code it means the 
>>> difference 
>>> > between a 30 min execution with memory allocation in the Gigabytes and 
>>> a few 
>>> > seconds of execution with only 800 bytes using the second version. 
>>> > I thought under-the-hood Julia basically runs those if statements 
>>> anyway for 
>>> > its dispatch, and don't know why it needs to allocate any memory. 
>>> > Having the if-statement workaround will be fine though. 
>>>
>>> Well, if you have a lot of these cheap functions being dynamically 
>>> dispatched I think it is not a good way to use the type. Depending on 
>>> your problem, you may be better off using a enum/flags/dict to 
>>> represent the type/get the values. 
>>>
>>> The reason for the allocation is that the return type is unknown. It 
>>> should be obvious to see if you check your code with code_warntype. 
>>>
>>> > 
>>> > On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote: 
>>> >> 
>>> >> 
>>> >>> Therefore there's no way the compiler can rewrite the slow version 
>>> to the 
>>> >>> fast version. 
>>> >> 
>>> >> 
>>> >> It knows that the element type is a Feature, so it could produce: 
>>> >> 
>>> >> if isa(features[i], A) 
>>> >> retval += evaluate(features[i]::A) 
>>> >> elseif isa(features[i], B) 
>>> >> retval += evaluate(features[i]::B) 
>>> >> else 
>>> >> retval += evaluate(features[i]) 
>>> >> end 
>>> >> 
>>> >> and it would make sense for abstract types that have few subtypes. I 
>>> >> didn't realize that dispatch was an order of magnitude slower than 
>>> type 
>>> >> checking. It's easy enough to write a macro generating this 
>>> expansion, too. 
>>> >> 
>>> >> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote: 
>>> >>> 
>>> >>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  
>>> >>> wrote: 
>>> >>> > Hello Julia Users. 
>>> >>> > 
>>> >>> > I ran into a weird slowdown issue and reproduced a minimal working 
>>> >>> > example. 
>>> >>> > Maybe someone can help shed some light. 
>>> >>> > 
>>> >>> > abstract Feature 
>>> >>> > 
>>> >>> > type A <: Feature end 
>>> >>> > evaluate(f::A) = 1.0 
>>> >>> > 
>>> >>> > type B <: Feature end 
>>> >>> > evaluate(f::B) = 0.0 
>>> >>> > 
>>> >>> > function slow(features::Vector{Feature}) 
>>> >>> > retval = 0.0 
>>> >>> > for i in 1 : length(features) 
>>> >>> > retval += evaluate(features[i]) 
>>> >>> > end 
>>> >>> > retval 
>>> >>> > end 
>>> >>> > 
>>> >>> > function fast(features::Vector{Feature}) 
>>> >>> > retval = 0.0 
>>> >>> > for i in 1 : length(features) 
>>> >>> > if isa(features[i], A) 
>>> >>> > retval += evaluate(features[i]::A) 
>>> >>> > else 
>>> >>> > retval += evaluate(features[i]::B) 
>>> >>> > end 
>>> >>> > end 
>>> >>> > retval 
>>> >>> > end 
>>> >>> > 
>>> >>> > using ProfileView 
>>> >>> > 
>>> >>> > features = Feature[] 
>>> >>> > for i in 1 : 1 
>>> >>> > push!(features, A()) 
>>> >>> > end 
>>> >>> > 
>>> >>> > slow(features) 
>>> >>> > @time slow(features) 
>>> >>> > fast(features) 
>>> >>> > @time fast(features) 
>>> >>> > 
>>> >>> > The output is: 
>>> >>> > 
>>> >>> > 0.000136 seconds (10.15 k allocations: 166.417 KB) 
>>> >>> > 0.12 seconds (5 allocations: 176 bytes) 
>>> >>> > 
>>> >>> > 
>>> >>> > This is a HUGE difference! Am I missing something big? Is there a 
>>> good 
>>> >>> > way 
>>> >>> > to inspect code to figure out where I am going wrong? 
>>> >>> 
>>> >>> This is because of type instability as you will find in the 
>>> performance 
>>> >>> tips. 
>>> >>> Note that slow and fast are not equivalent since the fast version 
>>> only 
>>> >>> accept `A` or `B` but the slow version accepts any subtype of 
>>> feature 
>>> that you may ever define. Therefore there's no way the compiler can
>>> rewrite the slow version to the fast version.

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Matt Bauman
Perhaps a simpler solution would be to just assert that `evaluate` always 
returns a Float64?  You may even be able to remove the isa branches in that 
case, but I'm not sure how it'll compare.
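A sketch of that suggestion, asserting the return type at the dynamic call
site (note Cedric's reply above: the x::Float64 variant did not change his
timings, so the gain may be limited):

function asserted(features::Vector{Feature})
    retval = 0.0
    for i in 1:length(features)
        retval += evaluate(features[i])::Float64  # assert the return type
    end
    retval
end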

On Saturday, April 2, 2016 at 4:34:09 PM UTC-4, Cedric St-Jean wrote:
>
> Thank you for the detailed explanation. I tried it out:
>
> function pretty_fast(features::Vector{Feature})
> retval = 0.0
> for i in 1 : length(features)
> if isa(features[i], A)
> x = evaluate(features[i]::A)
> elseif isa(features[i], B)
> x = evaluate(features[i]::B)
> else
> x = evaluate(features[i])
> end
> retval += x
> end
> retval
> end
>
> On my laptop, fast runs in 10 microseconds, pretty_fast in 30, and slow in 
> 210.
>
> On Saturday, April 2, 2016 at 12:24:18 PM UTC-4, Yichao Yu wrote:
>>
>> On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler  
>> wrote: 
>> > Thank you for the comments. In my original code it means the difference 
>> > between a 30 min execution with memory allocation in the Gigabytes and 
>> a few 
>> > seconds of execution with only 800 bytes using the second version. 
>> > I thought under-the-hood Julia basically runs those if statements 
>> anyway for 
>> > its dispatch, and don't know why it needs to allocate any memory. 
>> > Having the if-statement workaround will be fine though. 
>>
>> Well, if you have a lot of these cheap functions being dynamically 
>> dispatched I think it is not a good way to use the type. Depending on 
>> your problem, you may be better off using a enum/flags/dict to 
>> represent the type/get the values. 
>>
>> The reason for the allocation is that the return type is unknown. It 
>> should be obvious to see if you check your code with code_warntype. 
>>
>> > 
>> > On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote: 
>> >> 
>> >> 
>> >>> Therefore there's no way the compiler can rewrite the slow version to 
>> the 
>> >>> fast version. 
>> >> 
>> >> 
>> >> It knows that the element type is a Feature, so it could produce: 
>> >> 
>> >> if isa(features[i], A) 
>> >> retval += evaluate(features[i]::A) 
>> >> elseif isa(features[i], B) 
>> >> retval += evaluate(features[i]::B) 
>> >> else 
>> >> retval += evaluate(features[i]) 
>> >> end 
>> >> 
>> >> and it would make sense for abstract types that have few subtypes. I 
>> >> didn't realize that dispatch was an order of magnitude slower than 
>> type 
>> >> checking. It's easy enough to write a macro generating this expansion, 
>> too. 
>> >> 
>> >> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote: 
>> >>> 
>> >>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  
>> >>> wrote: 
>> >>> > Hello Julia Users. 
>> >>> > 
>> >>> > I ran into a weird slowdown issue and reproduced a minimal working 
>> >>> > example. 
>> >>> > Maybe someone can help shed some light. 
>> >>> > 
>> >>> > abstract Feature 
>> >>> > 
>> >>> > type A <: Feature end 
>> >>> > evaluate(f::A) = 1.0 
>> >>> > 
>> >>> > type B <: Feature end 
>> >>> > evaluate(f::B) = 0.0 
>> >>> > 
>> >>> > function slow(features::Vector{Feature}) 
>> >>> > retval = 0.0 
>> >>> > for i in 1 : length(features) 
>> >>> > retval += evaluate(features[i]) 
>> >>> > end 
>> >>> > retval 
>> >>> > end 
>> >>> > 
>> >>> > function fast(features::Vector{Feature}) 
>> >>> > retval = 0.0 
>> >>> > for i in 1 : length(features) 
>> >>> > if isa(features[i], A) 
>> >>> > retval += evaluate(features[i]::A) 
>> >>> > else 
>> >>> > retval += evaluate(features[i]::B) 
>> >>> > end 
>> >>> > end 
>> >>> > retval 
>> >>> > end 
>> >>> > 
>> >>> > using ProfileView 
>> >>> > 
>> >>> > features = Feature[] 
>> >>> > for i in 1 : 1 
>> >>> > push!(features, A()) 
>> >>> > end 
>> >>> > 
>> >>> > slow(features) 
>> >>> > @time slow(features) 
>> >>> > fast(features) 
>> >>> > @time fast(features) 
>> >>> > 
>> >>> > The output is: 
>> >>> > 
>> >>> > 0.000136 seconds (10.15 k allocations: 166.417 KB) 
>> >>> > 0.12 seconds (5 allocations: 176 bytes) 
>> >>> > 
>> >>> > 
>> >>> > This is a HUGE difference! Am I missing something big? Is there a 
>> good 
>> >>> > way 
>> >>> > to inspect code to figure out where I am going wrong? 
>> >>> 
>> >>> This is because of type instability as you will find in the 
>> performance 
>> >>> tips. 
>> >>> Note that slow and fast are not equivalent since the fast version 
>> only 
>> >>> accept `A` or `B` but the slow version accepts any subtype of feature 
>> >>> that you may ever define. Therefore there's no way the compiler can 
>> >>> rewrite the slow version to the fast version. 
>> >>> There are optimizations that can be applied to bring down the gap but 
>> >>> there'll always be a large difference between the two. 
>> >>> 
>> >>> > 
>> >>> > 
>> >>> > Thank you in advance for any guidance. 
>> >>> > 
>> >>> > 
>> >>> > -Tim 
>> >>> > 
>> >>> >

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Cedric St-Jean
Thank you for the detailed explanation. I tried it out:

function pretty_fast(features::Vector{Feature})
    retval = 0.0
    for i in 1 : length(features)
        if isa(features[i], A)
            x = evaluate(features[i]::A)
        elseif isa(features[i], B)
            x = evaluate(features[i]::B)
        else
            x = evaluate(features[i])
        end
        retval += x
    end
    retval
end

On my laptop, fast runs in 10 microseconds, pretty_fast in 30, and slow in 
210.

On Saturday, April 2, 2016 at 12:24:18 PM UTC-4, Yichao Yu wrote:
>
> On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler  > wrote: 
> > Thank you for the comments. In my original code it means the difference 
> > between a 30 min execution with memory allocation in the Gigabytes and a 
> few 
> > seconds of execution with only 800 bytes using the second version. 
> > I thought under-the-hood Julia basically runs those if statements anyway 
> for 
> > its dispatch, and don't know why it needs to allocate any memory. 
> > Having the if-statement workaround will be fine though. 
>
> Well, if you have a lot of these cheap functions being dynamically 
> dispatched I think it is not a good way to use the type. Depending on 
> your problem, you may be better off using a enum/flags/dict to 
> represent the type/get the values. 
>
> The reason for the allocation is that the return type is unknown. It 
> should be obvious to see if you check your code with code_warntype. 
>
> > 
> > On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote: 
> >> 
> >> 
> >>> Therefore there's no way the compiler can rewrite the slow version to 
> the 
> >>> fast version. 
> >> 
> >> 
> >> It knows that the element type is a Feature, so it could produce: 
> >> 
> >> if isa(features[i], A) 
> >> retval += evaluate(features[i]::A) 
> >> elseif isa(features[i], B) 
> >> retval += evaluate(features[i]::B) 
> >> else 
> >> retval += evaluate(features[i]) 
> >> end 
> >> 
> >> and it would make sense for abstract types that have few subtypes. I 
> >> didn't realize that dispatch was an order of magnitude slower than type 
> >> checking. It's easy enough to write a macro generating this expansion, 
> too. 
> >> 
> >> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote: 
> >>> 
> >>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  
> >>> wrote: 
> >>> > Hello Julia Users. 
> >>> > 
> >>> > I ran into a weird slowdown issue and reproduced a minimal working 
> >>> > example. 
> >>> > Maybe someone can help shed some light. 
> >>> > 
> >>> > abstract Feature 
> >>> > 
> >>> > type A <: Feature end 
> >>> > evaluate(f::A) = 1.0 
> >>> > 
> >>> > type B <: Feature end 
> >>> > evaluate(f::B) = 0.0 
> >>> > 
> >>> > function slow(features::Vector{Feature}) 
> >>> > retval = 0.0 
> >>> > for i in 1 : length(features) 
> >>> > retval += evaluate(features[i]) 
> >>> > end 
> >>> > retval 
> >>> > end 
> >>> > 
> >>> > function fast(features::Vector{Feature}) 
> >>> > retval = 0.0 
> >>> > for i in 1 : length(features) 
> >>> > if isa(features[i], A) 
> >>> > retval += evaluate(features[i]::A) 
> >>> > else 
> >>> > retval += evaluate(features[i]::B) 
> >>> > end 
> >>> > end 
> >>> > retval 
> >>> > end 
> >>> > 
> >>> > using ProfileView 
> >>> > 
> >>> > features = Feature[] 
> >>> > for i in 1 : 1 
> >>> > push!(features, A()) 
> >>> > end 
> >>> > 
> >>> > slow(features) 
> >>> > @time slow(features) 
> >>> > fast(features) 
> >>> > @time fast(features) 
> >>> > 
> >>> > The output is: 
> >>> > 
> >>> > 0.000136 seconds (10.15 k allocations: 166.417 KB) 
> >>> > 0.12 seconds (5 allocations: 176 bytes) 
> >>> > 
> >>> > 
> >>> > This is a HUGE difference! Am I missing something big? Is there a 
> good 
> >>> > way 
> >>> > to inspect code to figure out where I am going wrong? 
> >>> 
> >>> This is because of type instability as you will find in the 
> performance 
> >>> tips. 
> >>> Note that slow and fast are not equivalent since the fast version only 
> >>> accept `A` or `B` but the slow version accepts any subtype of feature 
> >>> that you may ever define. Therefore there's no way the compiler can 
> >>> rewrite the slow version to the fast version. 
> >>> There are optimizations that can be applied to bring down the gap but 
> >>> there'll always be a large difference between the two. 
> >>> 
> >>> > 
> >>> > 
> >>> > Thank you in advance for any guidance. 
> >>> > 
> >>> > 
> >>> > -Tim 
> >>> > 
> >>> > 
> >>> > 
> >>> > 
> >>> > 
>


[julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Matt Bauman
I don't want to get too deep into the weeds here, but I want to point out 
some things I like about Julia's closed ranges:

* Julia's ranges are just vectors of indices.  In exchange for giving up 
Python's offset style slicing, you get a fully-functional mathematical 
vector that supports all sorts of arithmetic. Idioms like `(-1:1)+i` allow 
you to very easily slide a 3-element window along your array.

* Since they represent a collection of indices instead of elements between 
two fenceposts, Julia's ranges will throw bounds errors if the endpoints go 
beyond the end of the array.  I find these sorts of errors extremely 
valuable — if I'm indexing with an N-element range I want N elements back.

* The only thing that's special about indexing by ranges is that they 
compute their elements on-the-fly and very efficiently.  You can create 
your own range-like array type very easily, and it can even generalize to 
multiple dimensions quite nicely.

* Being vectors themselves, you can index into range objects.  In fact, 
they will smartly re-compute new ranges upon indexing (if they can).

* In exchange for Python's negative indexing, you get the `end` keyword. It 
can be used directly in all sorts of computations.  In fact, you could use 
it in your example, replacing the hard-coded 100 with `end`. Now it 
supports arrays of all lengths. Circular buffers can be expressed as 
`buf[mod1(i, end)]`. (A couple of these idioms are sketched below.)
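
Concretely (these are my examples, not from the original message):

a = collect(1:100)
a[(-1:1)+50]                           # 3-element window centered at index 50

i, n = 3, 10
a[1+div((i-1)*end, n):div(i*end, n)]   # i-th of n portions; `end` == length(a)

buf = collect(1:8)
buf[mod1(9, end)]                      # circular indexing: wraps to buf[1]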

Of course there are trade-offs to either approach, and it takes time to 
adjust when moving from one system to the other.

If you work a lot with images and other higher-dimensional arrays, I 
recommend taking a look at Julia's multidimensional algorithms. I think 
Julia has a lot to offer in this domain and is quite unique in its 
multidimensional support. http://julialang.org/blog/2016/02/iteration


On Saturday, April 2, 2016 at 7:55:55 AM UTC-4, Spiritus Pap wrote:
>
> Hi there,
>
> TL;DR: A lot of people that could use julia (researchers currently using 
> python) won't. I give an example of how it makes my life hard, and I try to 
> suggest solutions.
>
> I've been introduced to Julia about a month ago.
> I'm an algorithmic researcher, I write mostly research code: statistics, 
> image processing, algorithms, etc.
> I use mostly python with numpy for my stuff, and C/++ when I need 
> performance.
> I was really happy when I heard of julia, because it takes the simplicity 
> of python and combines it with JIT compilation for speed!
>
> I REALLY tried to use julia, I really did. I tried to convince my friends 
> at work to use it too.
> However, there are a few things that make it unusable the way it is.
>
> Decisions that in my opinion were made by people who do not write 
> research-code:
> 1. Indexing in Julia. being 1 based and inclusive, instead of 0 based and 
> not including the end (like in c/python/lots of other stuff)
> 2. No simple integer-division operator.
>
> A simple example why it makes my *life hard*: Assume there is an array of 
> size 100, and I want to take the i-th portion of it out of n. This is a 
> common scenario for research-code, at least for me and my friends.
> In python:
> a[i*100/n:(i+1)*100/n]
> In julia:
> a[1+div(i*100,n):div((i+1)*100,n)]
>
> A lot more cumbersome in julia, and it is annoying and unattractive. This 
> is just a simple example.
>
> *Possible solutions:*
> The reason I'm writing this post is because I want to use julia, and I 
> want it to become great.
> *About the division:* I would suggest *adding *an integer division 
> *operator*, such as *//*. Would help a lot. Yes, I think it should be by 
> default, so that newcomers would need the least amount of effort to use 
> julia comfortably.
>
> *About the indexing:* I realize that this is a decision made a long time 
> ago, and everything is built this way. Yes, it is like matlab, and no, it 
> is not a good thing.
> I am a mathematician, and I almost always index my sequences and expressions 
> from 0; it usually just makes more sense.
> The problem is both in array (e.g. a[0]) and in slice (e.g. 0:10).
> An array could be solved perhaps by a *custom *0 based *array object*. 
> But the slice? Maybe adding a 0 based *slice operator* (such as .. or _)? 
> Is it possible to do so in a library?
>
> I'd be happy to write these myself, but I believe these need to be in the 
> standard library. Again, so that newcomers would need the least amount of 
> effort to use julia comfortably.
> If you have better suggestions, I'd be happy to hear.
>
> Thank you for your time
>


Re: [julia-users] Re: How to specify a template argument as a super type

2016-04-02 Thread Erik Schnetter
Julia's type hierarchies use single inheritance only. Thus inheritance
mechanisms that work fine in C++ can't always be made to work in
Julia. There are discussions on introducing concepts (traits), but
that's still in a prototype stage. (I'm using traits
in a project of mine; this works
fine, only the error messages are sometimes confusing.)

If you have `Name1{S <: Name2} <: S`, and you need to stick with
single inheritance, then you essentially define both

`Name1{S} <: Name1`
`Name1{S} <: S`

(here `Name1` is the abstract type that Julia introduces for every
parameterized type; `Name1` is different from `Name1{S}`).

Since Julia uses single inheritance, you also need to have either
`Name1 <: S` or `S <: Name1`. Both would be strange, since they would
introduce a relation between `Name1` and `S`, but you haven't
specified `S`.
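
As an aside, the traits workaround I mentioned usually looks something like 
this (a minimal sketch of the "Holy traits" pattern in the 0.4 syntax used 
in this thread; all the names here are made up for illustration):

abstract AvsB
type IsA <: AvsB end
type IsB <: AvsB end

# made-up trait assignments
avsb(::Type{Int})     = IsA()
avsb(::Type{Float64}) = IsB()

# dispatch on the trait value instead of on the type hierarchy
frob(x) = frob(avsb(typeof(x)), x)
frob(::IsA, x) = "A-like"
frob(::IsB, x) = "B-like"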

-erik


On Sat, Apr 2, 2016 at 3:36 PM, Scott Lundberg  wrote:
> Thanks Erik, though I am not quite clear how it would break things.
>
> As for the real goal:
>
> Name2 is also templated such as Name2{A} and Name2{B}. There are lots of
> methods that dispatch depending on Name2{A} or Name2{B}.
>
> I want to add a subtype of Name2 (which I call Name1) that still correctly
> dispatches for those methods. Another way I would like to do it is:
>
> abstract Name1{Name2{T <: AvsB}} <: Name2{T}
>
> but of course that also does not work.
>
> Thanks!
>
> On Saturday, April 2, 2016 at 11:55:47 AM UTC-7, Erik Schnetter wrote:
>>
>> If you define an abstract type `Name1{S}`, then Julia automatically
>> defines another abstract type `Name` with `Name1{S} <: Name`.
>>
>> This would break if you tried to insert `S` into the type hierarchy as
>> well.
>>
>> -erik
>>
>> On Sat, Apr 2, 2016 at 2:51 PM, Tomas Lycken  wrote:
>> > If it's abstract, what is the purpose of the type parameter? In other
>> > words,
>> > what is the ultimate goal you're reaching for?
>> >
>> > //T
>> >
>> > On Saturday, April 2, 2016 at 3:17:34 AM UTC+2, Scott Lundberg wrote:
>> >>
>> >> You can define:
>> >>
>> >> abstract Name1{S <: Name2} <: Name3{S}
>> >>
>> >> but I want to define:
>> >>
>> >> abstract Name1{S <: Name2} <: S
>> >>
>> >> However the second line does not work. Is there some other way to
>> >> accomplish the same thing? Thanks!
>>
>>
>>
>> --
>> Erik Schnetter 
>> http://www.perimeterinstitute.ca/personal/eschnetter/



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: How to specify a template argument as a super type

2016-04-02 Thread Scott Lundberg
Thanks Erik, though I am not quite clear how it would break things.

As for the real goal:

Name2 is also templated such as Name2{A} and Name2{B}. There are lots of 
methods that dispatch depending on Name2{A} or Name2{B}.

I want to add a subtype of Name2 (which I call Name1) that still correctly 
dispatches for those methods. Another way I would like to do it is:

abstract Name1{Name2{T <: AvsB}} <: Name2{T} 

but of course that also does not work.

Thanks!

On Saturday, April 2, 2016 at 11:55:47 AM UTC-7, Erik Schnetter wrote:
>
> If you define an abstract type `Name1{S}`, then Julia automatically 
> defines another abstract type `Name` with `Name1{S} <: Name`. 
>
> This would break if you tried to insert `S` into the type hierarchy as 
> well. 
>
> -erik 
>
> On Sat, Apr 2, 2016 at 2:51 PM, Tomas Lycken  > wrote: 
> > If it's abstract, what is the purpose of the type parameter? In other 
> words, 
> > what is the ultimate goal you're reaching for? 
> > 
> > //T 
> > 
> > On Saturday, April 2, 2016 at 3:17:34 AM UTC+2, Scott Lundberg wrote: 
> >> 
> >> You can define: 
> >> 
> >> abstract Name1{S <: Name2} <: Name3{S} 
> >> 
> >> but I want to define: 
> >> 
> >> abstract Name1{S <: Name2} <: S 
> >> 
> >> However the second line does not work. Is there some other way to 
> >> accomplish the same thing? Thanks! 
>
>
>
> -- 
> Erik Schnetter > 
> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>


[julia-users] Re: Julia console with inline graphics?

2016-04-02 Thread Oliver Schulz
On Saturday, April 2, 2016 at 2:46:51 PM UTC+2, Josh Day wrote:
>
> I believe that's iTerm being used with 
> https://github.com/Keno/TerminalExtensions.jl. 
> Depending on the complexity of your plots, 
>  Depending on the complexity of your plots, 
> https://github.com/Evizero/UnicodePlots.jl may be sufficient for you.
>

Of course - something this cool, Keno had to be behind it. ;-) 
Unfortunately (in this respect) I'm on Linux, so I don't have iTerm. Hm, I 
wonder if
there's a Linux terminal emulator that could do it ... xterm has graphics 
capabilities (used by w3m, for example) ...



[julia-users] What should Julia do on "Ubuntu compatibility mode in Windows 10"?

2016-04-02 Thread Páll Haraldsson
I guess it would just run as a regular Linux ELF binary.

Since the announcement, and given that the native Windows Julia already 
works, we can just ignore the Ubuntu layer and run Julia in non-Ubuntu mode.

-- 
Palli.



[julia-users] Transpiling Julia[-lite branch] to Java/JVM (or CLR), Go, D

2016-04-02 Thread Páll Haraldsson
Transpiling to JavaScript with Emscripten has already been discussed. If that 
is possible (eventually), all the other languages/environments should be 
possible, or at least easier.

I know of JavaCall.jl. It seems to have issues.

I have my reasons* for compiling to the JVM (and I know the point of Julia is 
partly to avoid VMs..), and I want to see what would potentially break, e.g. 
BLAS, which is why I bring up Julia-lite.

Julia2C is already available.

Should compiling to JVM bytecode be comparatively easy (standard library 
only)?

If this is just crazy talk, then let me know and I will not dig into this.


* JVM has e.g. hard-real-time garbage collection available. Go has 
best-in-class soft-real-time GC for concurrency.

Ccall is a problem (for the JVM); would JNI work?

Java and Go do not have multidimensional arrays. A problem? Just map each to 
a single-dimensional one; memory is linear anyway.

Is libuv a problem? Is it needed? It seems like just an implementation 
detail, not a requirement.

Is UTF-16 in Java/the JVM a problem? It's "modified UTF-16".

-- 
Palli.





Re: [julia-users] Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Páll Haraldsson
On Saturday, April 2, 2016 at 12:22:30 PM UTC, Tamas Papp wrote:
>
> On Sat, Apr 02 2016, Spiritus Pap wrote: 
>
> > Hi there, 
> > 
> > TL;DR: A lot of people that could use julia (researchers currently using 
> > python) won't. I give an example of how it makes my life hard, and I try 
> to 
> > suggest solutions. 
>
> While there are surely people who could use Julia but aren't, this 
> applies many languages. IMO people should be choosing languages based on 
> features which reflect deep architectural choices about a language


Such a good post. I was hoping people would discover this for themselves, and 
that I would just have to point them to Julia.
 

>
> If you are repeating something all the time and find it cumbersome, wrap 
> it in a function.
>

That is just good programming..
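
For instance, a sketch of such a wrapper for the example above (mine):

# i-th of n equal portions of a (0-based i, as in the Python version)
portion(a, i, n) = a[1+div(i*length(a), n):div((i+1)*length(a), n)]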

Maybe people are afraid it will be slower.

I see @inline in the standard library. Should I be using it more? Does it 
only work for functions that call leaf-functions?
 

> > I am a mathematician, and I almost always index my sequences expressions 
> in 
> > 0, it usually just makes more sense.


I wasn't expecting this from math people.. (except maybe number theorists).

I accepted 0-based decades ago; now 1-based (because of Julia). Maybe it's 
even preferable.

Regrettably, such "trivial" matters seem off-putting to some [C] people..

Still, GC, a deeper issue, upsets other people (the same C/C++ people). The 
manual is very sparse on manual memory management in Julia. It is less 
documented than in the D language; they have a page on it, and it seems just 
as optional in both languages.

See https://github.com/JuliaLang/julia/issues/558 and countless 
> discussions about this. 

 

>
> Best, 
>
> Tamas 
>


Re: [julia-users] Re: How to specify a template argument as a super type

2016-04-02 Thread Erik Schnetter
If you define an abstract type `Name1{S}`, then Julia automatically
defines another abstract type `Name` with `Name1{S} <: Name`.

This would break if you tried to insert `S` into the type hierarchy as well.

-erik

On Sat, Apr 2, 2016 at 2:51 PM, Tomas Lycken  wrote:
> If it's abstract, what is the purpose of the type parameter? In other words,
> what is the ultimate goal you're reaching for?
>
> //T
>
> On Saturday, April 2, 2016 at 3:17:34 AM UTC+2, Scott Lundberg wrote:
>>
>> You can define:
>>
>> abstract Name1{S <: Name2} <: Name3{S}
>>
>> but I want to define:
>>
>> abstract Name1{S <: Name2} <: S
>>
>> However the second line does not work. Is there some other way to
>> accomplish the same thing? Thanks!



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: How to specify a template argument as a super type

2016-04-02 Thread Tomas Lycken
If it's abstract, what is the purpose of the type parameter? In other 
words, what is the ultimate goal you're reaching for?

//T 

On Saturday, April 2, 2016 at 3:17:34 AM UTC+2, Scott Lundberg wrote:
>
> You can define:
>
> abstract Name1{S <: Name2} <: Name3{S}
>
> but I want to define:
>
> abstract Name1{S <: Name2} <: S
>
> However the second line does not work. Is there some other way to 
> accomplish the same thing? Thanks!
>


Re: [julia-users] string version of expressions not parseable

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 1:02 PM,   wrote:
> I seem to be unable to get this to work
>
>>serialize(STDOUT, :(x + 1))
> =b+1
>
>>deserialize(IOBuffer("=b+1"))
> :call
>
> which is only form 1 out of 4.

Deserializing and serializing do not use a plain-text format.

julia> io = IOBuffer()
IOBuffer(data=UInt8[...], readable=true, writable=true, seekable=true,
append=false, size=0, maxsize=Inf, ptr=1, mark=-1)

julia> serialize(io, :(x + 1))

julia> seekstart(io)
IOBuffer(data=UInt8[...], readable=true, writable=true, seekable=true,
append=false, size=9, maxsize=Inf, ptr=1, mark=-1)

julia> deserialize(io)
:(x + 1)
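
(Side note, my addition: if a human-readable round trip is what you are 
after, string and parse cover many simple expressions, modulo caveats such 
as the aliasing issue below:)

julia> parse(string(:(x + 1)))
:(x + 1)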


>
> Also, could you give me an example of a situation where the aliasing is an
> issue? I'm unclear when that pops up.


[julia-users] Re: Can Julia handle a large number of methods for a function (and a lot of types)?

2016-04-02 Thread Andrew Keller
Hi Oliver,

Regarding your first question posed in this thread, I think you might be 
interested in this documentation of how functions 
will work in Julia 0.5 if you haven't read it already. There is some 
discussion of how methods are dispatched, as well as compiler efficiency 
issues. I hope you don't mind that I've tried out your setindex and 
getindex approach. It is very pleasant to use but I have not benchmarked it 
in any serious way, mainly because I'm not sure what a sensible benchmark 
would be. If you'd like me to try out something I'll see what I can do.

It sounds like you have probably been thinking deeply about instrument 
control for a much longer period of time than I have. I'll write you again 
once I've gotten our codebase more stable and documented, and I welcome 
criticism. I haven't given much thought to coordinated parallel access yet 
but I agree that it will be important. A short summary is that right now we 
have one package containing a module with submodules for each instrument. 
Each instrument has an explicit type and most methods are generated using 
metaprogramming based off some template for each instrument type. Most 
instrument types are subtypes of `InstrumentVISA`, and there are a few 
methods that are assumed to work on all instruments supporting VISA. I must 
say, it is not obvious what the best type hierarchy is, and I could easily 
believe that traits are a better way to go when describing the 
functionality of instruments. You can find any number of discussion 
threads, GitHub issues, etc. on traits in Julia but I don't know what 
current consensus is.

Unitful.jl and SIUnits.jl take the same overall approach: encode the units 
in the type signature of a value. Accordingly, Unitful.jl should have great 
performance at "run-time," and that is a design goal. Anytime some operation 
with units is happening in a tight loop, it should be fast. I have only had 
time to look at some of the generated machine code but what I've looked at 
is the same or very close to what is generated for operations without 
units. I have not optimized the "compile-time" performance at all. Probably 
the first time a new unit is encountered, some modest performance penalty 
results. I'd like to relook at how I'm handling dimensions (length, time, 
etc.) because in retrospect I think I'm doing some odd things. An open 
question is how one could dispatch on the dimensions (e.g. x::Length). I 
tried this once and it sort of worked but the type-gymnastics became very 
ugly, so maybe something like traits would be better.

Last I checked, the two packages are different in that Unitful.jl supports: 
rational powers of the units (you can do Hz^(1/2), useful for noise 
spectra); non-SI units like imperial units; the ability to add new units on 
the fly with macros; preservation of exact conversions. My package only 
supports Julia 0.5 though. I think the differences between the packages 
arise mainly because SIUnits.jl was written when Julia was less mature. 
SIUnits.jl is still sufficient for many needs and is remarkably clever.
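
To illustrate the encode-units-in-the-type idea in miniature (my toy sketch; 
the real packages are far more general and careful):

immutable Quantity{T<:Number,U}   # U is a unit tag such as :m or :s
    val::T
end

import Base: +
+{T,U}(a::Quantity{T,U}, b::Quantity{T,U}) = Quantity{T,U}(a.val + b.val)

m3 = Quantity{Int,:m}(3)
m3 + Quantity{Int,:m}(4)          # Quantity{Int,:m}(7)
# m3 + Quantity{Int,:s}(1)        # MethodError: mismatched units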

Thanks for linking to your code. I have no experience with Scala but I will 
take a look at it.

Best,
Andrew


On Saturday, April 2, 2016 at 4:45:04 AM UTC-7, Oliver Schulz wrote:
>
> Hi Andrew,
>
> sorry, I overlooked your post, somehow ...
>
> On Monday, March 28, 2016 at 3:47:42 AM UTC+2, Andrew Keller wrote:
>>
>> Just a heads up that I've been working on something similar for the past 
>> six months or so.
>>
>
> Very nice - I'm open to collaborating on this topic, of course. At the moment, 
> we're still actively using my Scala/Akka based system 
> (https://github.com/daqcore/daqcore-scala) in our group, but I'm starting 
> to map out its Julia replacement. daqcore-scala currently implements some 
> VME flash-ADCs and some slow-control devices like a Labjack, some HV 
> Supplies, vacuum pressure gauges, etc., but the architecture is not device 
> specific. The Julia version is intended to be equally generic, but will 
> initially target the same/similar devices. I'd like to make it more 
> modular, though, it doesn't all have to end up in one package.
>
> One requirement will be coordinated parallel access to the instruments 
> from different places of the code (can't have one Task sending a command to 
> an instrument while another task isn't done with his query - which may span 
> multiple I/O operations - yet). Also multi-host operation for 
> high-throughput applications might become necessary at some point. The 
> basis for both came for free with Scala/Akka actors, and I started 
> Actors.jl (https://github.com/daqcore/Actors.jl) with this in mind. 
> Another things that was easy in Scala was representing (potentially 
> overlapping) device classes using traits. I'm thinking how to best do this 
> in Julia - but due to Julia's dynamic nature, explicit device classes may 
> not be absolutely necessary. Still,

Re: [julia-users] string version of expressions not parseable

2016-04-02 Thread vishesh
I seem to be unable to get this to work

>serialize(STDOUT, :(x + 1))
=b+1

>deserialize(IOBuffer("=b+1"))
:call

which is only form 1 out of 4.

Also, could you give me an example of a situation where the aliasing is an 
issue? I'm unclear when that pops up.


Re: [julia-users] Re: getting a warning when overloading +(a,b)

2016-04-02 Thread Gerson J. Ferreira
Ok. I'm reading again the Modules section of the manual. Now I understand
better the difference between import and using. Thanks.
On Apr 2, 2016 1:27 PM, "Kristoffer Carlsson"  wrote:

> Because you want to overload a function from Base.


[julia-users] Re: getting a warning when overloading +(a,b)

2016-04-02 Thread Kristoffer Carlsson
Because you want to overload a function from Base.
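
In other words, a minimal sketch (my illustration, using the `numerr` type 
from the original post further down this digest):

import Base: +    # makes Base's + open for extension with new methods

# without the import, +(a::numerr, b::numerr) = ... would define a brand-new
# + that shadows Base.+ in your module, hence the warning
+(a::numerr, b::numerr) = numerr(a.num + b.num, sqrt(a.err^2 + b.err^2))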

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler  wrote:
> Thank you for the comments. In my original code it means the difference
> between a 30 min execution with memory allocation in the Gigabytes and a few
> seconds of execution with only 800 bytes using the second version.
> I thought under-the-hood Julia basically runs those if statements anyway for
> its dispatch, and don't know why it needs to allocate any memory.
> Having the if-statement workaround will be fine though.

Well, if you have a lot of these cheap functions being dynamically
dispatched I think it is not a good way to use the type. Depending on
your problem, you may be better off using an enum/flags/dict to
represent the type/get the values.

The reason for the allocation is that the return type is unknown. It
should be obvious to see if you check your code with code_warntype.
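
To make the enum-style suggestion concrete, a rough sketch (mine, not from 
the thread, in 0.4 syntax):

const FEATURE_A = 1
const FEATURE_B = 2

immutable TaggedFeature
    tag::Int              # which "subtype" this value represents
end

evaluate(f::TaggedFeature) = f.tag == FEATURE_A ? 1.0 : 0.0

function tagged_sum(features::Vector{TaggedFeature})
    retval = 0.0
    for f in features
        retval += evaluate(f)   # concrete element type: no dynamic dispatch
    end
    retval
end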

>
> On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote:
>>
>>
>>> Therefore there's no way the compiler can rewrite the slow version to the
>>> fast version.
>>
>>
>> It knows that the element type is a Feature, so it could produce:
>>
>> if isa(features[i], A)
>> retval += evaluate(features[i]::A)
>> elseif isa(features[i], B)
>> retval += evaluate(features[i]::B)
>> else
>> retval += evaluate(features[i])
>> end
>>
>> and it would make sense for abstract types that have few subtypes. I
>> didn't realize that dispatch was an order of magnitude slower than type
>> checking. It's easy enough to write a macro generating this expansion, too.
>>
>> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote:
>>>
>>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler 
>>> wrote:
>>> > Hello Julia Users.
>>> >
>>> > I ran into a weird slowdown issue and reproduced a minimal working
>>> > example.
>>> > Maybe someone can help shed some light.
>>> >
>>> > abstract Feature
>>> >
>>> > type A <: Feature end
>>> > evaluate(f::A) = 1.0
>>> >
>>> > type B <: Feature end
>>> > evaluate(f::B) = 0.0
>>> >
>>> > function slow(features::Vector{Feature})
>>> > retval = 0.0
>>> > for i in 1 : length(features)
>>> > retval += evaluate(features[i])
>>> > end
>>> > retval
>>> > end
>>> >
>>> > function fast(features::Vector{Feature})
>>> > retval = 0.0
>>> > for i in 1 : length(features)
>>> > if isa(features[i], A)
>>> > retval += evaluate(features[i]::A)
>>> > else
>>> > retval += evaluate(features[i]::B)
>>> > end
>>> > end
>>> > retval
>>> > end
>>> >
>>> > using ProfileView
>>> >
>>> > features = Feature[]
>>> > for i in 1 : 1
>>> > push!(features, A())
>>> > end
>>> >
>>> > slow(features)
>>> > @time slow(features)
>>> > fast(features)
>>> > @time fast(features)
>>> >
>>> > The output is:
>>> >
>>> > 0.000136 seconds (10.15 k allocations: 166.417 KB)
>>> > 0.12 seconds (5 allocations: 176 bytes)
>>> >
>>> >
>>> > This is a HUGE difference! Am I missing something big? Is there a good
>>> > way
>>> > to inspect code to figure out where I am going wrong?
>>>
>>> This is because of type instability as you will find in the performance
>>> tips.
>>> Note that slow and fast are not equivalent since the fast version only
>>> accept `A` or `B` but the slow version accepts any subtype of feature
>>> that you may ever define. Therefore there's no way the compiler can
>>> rewrite the slow version to the fast version.
>>> There are optimizations that can be applied to bring down the gap but
>>> there'll always be a large difference between the two.
>>>
>>> >
>>> >
>>> > Thank you in advance for any guidance.
>>> >
>>> >
>>> > -Tim
>>> >
>>> >
>>> >
>>> >
>>> >


Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Tim Wheeler
Thank you for the comments. In my original code it means the difference 
between a 30 min execution with memory allocation in the Gigabytes and a 
few seconds of execution with only 800 bytes using the second version.
I thought under-the-hood Julia basically runs those if statements anyway 
for its dispatch, and don't know why it needs to allocate any memory.
Having the if-statement workaround will be fine though. 

On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote:
>
>
> Therefore there's no way the compiler can rewrite the slow version to the 
>> fast version. 
>
>
> It knows that the element type is a Feature, so it could produce:
>
> if isa(features[i], A)
> retval += evaluate(features[i]::A)
> elseif isa(features[i], B)
> retval += evaluate(features[i]::B)
> else
> retval += evaluate(features[i])
> end
>
> and it would make sense for abstract types that have few subtypes. I 
> didn't realize that dispatch was an order of magnitude slower than type 
> checking. It's easy enough to write a macro generating this expansion, too.
>
> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote:
>>
>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  
>> wrote: 
>> > Hello Julia Users. 
>> > 
>> > I ran into a weird slowdown issue and reproduced a minimal working 
>> example. 
>> > Maybe someone can help shed some light. 
>> > 
>> > abstract Feature 
>> > 
>> > type A <: Feature end 
>> > evaluate(f::A) = 1.0 
>> > 
>> > type B <: Feature end 
>> > evaluate(f::B) = 0.0 
>> > 
>> > function slow(features::Vector{Feature}) 
>> > retval = 0.0 
>> > for i in 1 : length(features) 
>> > retval += evaluate(features[i]) 
>> > end 
>> > retval 
>> > end 
>> > 
>> > function fast(features::Vector{Feature}) 
>> > retval = 0.0 
>> > for i in 1 : length(features) 
>> > if isa(features[i], A) 
>> > retval += evaluate(features[i]::A) 
>> > else 
>> > retval += evaluate(features[i]::B) 
>> > end 
>> > end 
>> > retval 
>> > end 
>> > 
>> > using ProfileView 
>> > 
>> > features = Feature[] 
>> > for i in 1 : 1 
>> > push!(features, A()) 
>> > end 
>> > 
>> > slow(features) 
>> > @time slow(features) 
>> > fast(features) 
>> > @time fast(features) 
>> > 
>> > The output is: 
>> > 
>> > 0.000136 seconds (10.15 k allocations: 166.417 KB) 
>> > 0.12 seconds (5 allocations: 176 bytes) 
>> > 
>> > 
>> > This is a HUGE difference! Am I missing something big? Is there a good 
>> way 
>> > to inspect code to figure out where I am going wrong? 
>>
>> This is because of type instability as you will find in the performance 
>> tips. 
>> Note that slow and fast are not equivalent since the fast version only 
>> accept `A` or `B` but the slow version accepts any subtype of feature 
>> that you may ever define. Therefore there's no way the compiler can 
>> rewrite the slow version to the fast version. 
>> There are optimizations that can be applied to bring down the gap but 
>> there'll always be a large difference between the two. 
>>
>> > 
>> > 
>> > Thank you in advance for any guidance. 
>> > 
>> > 
>> > -Tim 
>> > 
>> > 
>> > 
>> > 
>> > 
>>
>

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 10:26 AM, Cedric St-Jean  wrote:
>
>> Therefore there's no way the compiler can rewrite the slow version to the
>> fast version.
>
>
> It knows that the element type is a Feature, so it could produce:
>
> if isa(features[i], A)
> retval += evaluate(features[i]::A)
> elseif isa(features[i], B)
> retval += evaluate(features[i]::B)
> else
> retval += evaluate(features[i])
> end

This is kind of the optimization I mentioned, but no, this will still be
much slower than the other version.
The compiler has no idea what the return type of the third branch is, so this
version is still type unstable and you get dynamic dispatch at every
iteration for the floating point add. Of course there are more
sophisticated transformations that can keep you on the fast path as
long as possible and create extra code to check and handle the slow
cases, but it will still be slower.

I also recommend Jeff's talk[1] for a better explanation of the general idea.

[1] https://www.youtube.com/watch?v=cjzcYM9YhwA

>
> and it would make sense for abstract types that have few subtypes. I didn't
> realize that dispatch was an order of magnitude slower than type checking.
> It's easy enough to write a macro generating this expansion, too.
>
> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote:
>>
>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  wrote:
>> > Hello Julia Users.
>> >
>> > I ran into a weird slowdown issue and reproduced a minimal working
>> > example.
>> > Maybe someone can help shed some light.
>> >
>> > abstract Feature
>> >
>> > type A <: Feature end
>> > evaluate(f::A) = 1.0
>> >
>> > type B <: Feature end
>> > evaluate(f::B) = 0.0
>> >
>> > function slow(features::Vector{Feature})
>> > retval = 0.0
>> > for i in 1 : length(features)
>> > retval += evaluate(features[i])
>> > end
>> > retval
>> > end
>> >
>> > function fast(features::Vector{Feature})
>> > retval = 0.0
>> > for i in 1 : length(features)
>> > if isa(features[i], A)
>> > retval += evaluate(features[i]::A)
>> > else
>> > retval += evaluate(features[i]::B)
>> > end
>> > end
>> > retval
>> > end
>> >
>> > using ProfileView
>> >
>> > features = Feature[]
>> > for i in 1 : 1
>> > push!(features, A())
>> > end
>> >
>> > slow(features)
>> > @time slow(features)
>> > fast(features)
>> > @time fast(features)
>> >
>> > The output is:
>> >
>> > 0.000136 seconds (10.15 k allocations: 166.417 KB)
>> > 0.12 seconds (5 allocations: 176 bytes)
>> >
>> >
>> > This is a HUGE difference! Am I missing something big? Is there a good
>> > way
>> > to inspect code to figure out where I am going wrong?
>>
>> This is because of type instability as you will find in the performance
>> tips.
>> Note that slow and fast are not equivalent since the fast version only
>> accept `A` or `B` but the slow version accepts any subtype of feature
>> that you may ever define. Therefore there's no way the compiler can
>> rewrite the slow version to the fast version.
>> There are optimizations that can be applied to bring down the gap but
>> there'll always be a large difference between the two.
>>
>> >
>> >
>> > Thank you in advance for any guidance.
>> >
>> >
>> > -Tim
>> >
>> >
>> >
>> >
>> >


Re: [julia-users] Ubuntu, do I have to install libfftw to use conv?

2016-04-02 Thread Gerson J. Ferreira
Wow! When was the PPA link deleted from the Download page on Julia's
website? I installed Julia in the classroom via the PPA in February.

I'll try the generic binaries there on Monday. On my personal computer I have
all the libs installed, so I cannot test it here unless I uninstall them, and
I don't want to do that.



--
Gerson J. Ferreira
Prof. Dr. @ InFis - UFU
--
Nanosciences group 
Institute of Physics
Federal University of Uberlândia, Brazil
--

On Sat, Apr 2, 2016 at 11:51 AM, Milan Bouchet-Valat 
wrote:

> Le samedi 02 avril 2016 à 07:46 -0700, Gerson J. Ferreira a écrit :
> > In my personal computer I have many things installed and I was using
> > the convolution command (conv and conv2) with no problems.
> >
> > But yesterday I was teaching and I have no sudo access on the class
> > room. The students were trying to use conv and getting an error
> > saying that FFTW was missing.
> >
> > I'll only have sudo access there to test in on Monday. But my guess
> > is that I need to install libfftw3, correct?
> >
> > Is this a bug? Since conv is part of the base I guess it should have
> > libfftw3 as a dependency on Ubuntu, right?
> How did you install Julia? FWIW, using the PPA is no longer recommended
> as it is unmaintained. Does it work if you download the generic binary
> tarball for 0.4.5?
>
>
> Regards
>


[julia-users] Re: "LoadError: UndefVarError: xgboost not defined" while using XGBoost package

2016-04-02 Thread Cedric St-Jean
I tried it and got an error while `using`. This solved some of the issues:

Pkg.checkout("XGBoost")

But trying to use it on random data yields an error as it cannot find some 
function in the C library. You can try posting an issue. FYI there's also 
GradientBoost.jl, as well as a 
gradient-boosting algo in scikit-learn, 
accessible through ScikitLearn.jl.

On Saturday, April 2, 2016 at 7:55:53 AM UTC-4, Aditya Sharma wrote:
>
> While trying to build an Xgboost model, I am getting this error. How to 
> make this work?
>
> # train xgboost
> num_round = 100
> using XGBoost
> bst = xgboost(train_X, num_round, label=train_Y, eta=1, max_depth=2)
>
>
> LoadError: UndefVarError: xgboost not defined
> while loading In[33], in expression starting on line 4
>
>

Re: [julia-users] Ubuntu, do I have to install libfftw to use conv?

2016-04-02 Thread Milan Bouchet-Valat
Le samedi 02 avril 2016 à 07:46 -0700, Gerson J. Ferreira a écrit :
> In my personal computer I have many things installed and I was using
> the convolution command (conv and conv2) with no problems.
> 
> But yesterday I was teaching and I have no sudo access on the class
> room. The students were trying to use conv and getting an error
> saying that FFTW was missing. 
> 
> I'll only have sudo access there to test in on Monday. But my guess
> is that I need to install libfftw3, correct? 
> 
> Is this a bug? Since conv is part of the base I guess it should have
> libfftw3 as a dependency on Ubuntu, right?
How did you install Julia? FWIW, using the PPA is no longer recommended
as it is unmaintained. Does it work if you download the generic binary
tarball for 0.4.5?


Regards


[julia-users] Ubuntu, do I have to install libfftw to use conv?

2016-04-02 Thread Gerson J. Ferreira
In my personal computer I have many things installed and I was using the 
convolution command (conv and conv2) with no problems.

But yesterday I was teaching and I had no sudo access in the classroom. 
The students were trying to use conv and were getting an error saying that 
FFTW was missing. 

I'll only have sudo access there to test it on Monday. But my guess is that 
I need to install libfftw3, correct? 

Is this a bug? Since conv is part of the base I guess it should have 
libfftw3 as a dependency on Ubuntu, right?


[julia-users] Re: getting a warning when overloading +(a,b)

2016-04-02 Thread Gerson J. Ferreira
Thanks! But why do I need to import Base.+? 

Em sexta-feira, 1 de abril de 2016 12:02:13 UTC-3, Giuseppe Ragusa escreveu:
>
> ```
> import Base.+
> type numerr
> num
> err
> end
>
> +(a::numerr, b::numerr) = numerr(a.num + b.num, sqrt(a.err^2 + b.err^2));
> +(a::Any, b::numerr) = numerr(a + b.num, b.err);
> +(a::numerr, b::Any) = numerr(a.num + b, a.err);
>
> x = numerr(10, 1);
> y = numerr(20, 2);
>
> println(x+y)
> println(2+x)
> println(y+2)
> ```
>
>
>
> On Friday, April 1, 2016 at 4:51:53 PM UTC+2, Gerson J. Ferreira wrote:
>>
>> I'm trying to overload simple math operators to try a code for error 
>> propagation, but I'm getting a warning. Here's a short code that already 
>> shows the warning message:
>>
>> type numerr
>> num
>> err
>> end
>>
>> +(a::numerr, b::numerr) = numerr(a.num + b.num, sqrt(a.err^2 + b.err^2));
>> +(a::Any, b::numerr) = numerr(a + b.num, b.err);
>> +(a::numerr, b::Any) = numerr(a.num + b, a.err);
>>
>> x = numerr(10, 1);
>> y = numerr(20, 2);
>>
>> println(x+y)
>> println(2+x)
>> println(y+2)
>>
>> I didn't see much about operator overloading in Julia's manual. I would 
>> really appreciate if someone could point me in the right direction.
>>
>> The code above returns this warning in Julia 0.4.2:
>>
>>_   _ _(_)_ |  A fresh approach to technical computing
>>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>>_ _   _| |_  __ _   |  Type "?help" for help.
>>   | | | | | | |/ _` |  |
>>   | | |_| | | | (_| |  |  Version 0.4.2 (2015-12-06 21:47 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org release
>> |__/   |  x86_64-linux-gnu
>>
>> julia> include("overload.jl")
>> WARNING: module Main should explicitly import + from Base
>> numerr(30,2.23606797749979)
>> numerr(12,1)
>> numerr(22,2)
>>
>>
>>

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread Cedric St-Jean


> Therefore there's no way the compiler can rewrite the slow version to the 
> fast version. 


It knows that the element type is a Feature, so it could produce:

if isa(features[i], A) 
    retval += evaluate(features[i]::A) 
elseif isa(features[i], B) 
    retval += evaluate(features[i]::B) 
else 
    retval += evaluate(features[i]) 
end 

and it would make sense for abstract types that have few subtypes. I didn't 
realize that dispatch was an order of magnitude slower than type checking. 
It's easy enough to write a macro generating this expansion, too.
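
Such a macro might look roughly like this (my untested sketch; `@typeswitch` 
is a made-up name, and note the argument expression is spliced in several 
times, so it should be cheap and side-effect free):

macro typeswitch(f, x, types...)
    ex = :($(esc(f))($(esc(x))))              # fallback: dynamic dispatch
    for k in length(types):-1:1
        T = types[k]
        ex = :(isa($(esc(x)), $(esc(T))) ?
                   $(esc(f))($(esc(x))::$(esc(T))) : $ex)
    end
    ex
end

# usage in the loop body:  retval += @typeswitch evaluate features[i] A B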

On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote:
>
> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  > wrote: 
> > Hello Julia Users. 
> > 
> > I ran into a weird slowdown issue and reproduced a minimal working 
> example. 
> > Maybe someone can help shed some light. 
> > 
> > abstract Feature 
> > 
> > type A <: Feature end 
> > evaluate(f::A) = 1.0 
> > 
> > type B <: Feature end 
> > evaluate(f::B) = 0.0 
> > 
> > function slow(features::Vector{Feature}) 
> > retval = 0.0 
> > for i in 1 : length(features) 
> > retval += evaluate(features[i]) 
> > end 
> > retval 
> > end 
> > 
> > function fast(features::Vector{Feature}) 
> > retval = 0.0 
> > for i in 1 : length(features) 
> > if isa(features[i], A) 
> > retval += evaluate(features[i]::A) 
> > else 
> > retval += evaluate(features[i]::B) 
> > end 
> > end 
> > retval 
> > end 
> > 
> > using ProfileView 
> > 
> > features = Feature[] 
> > for i in 1 : 1 
> > push!(features, A()) 
> > end 
> > 
> > slow(features) 
> > @time slow(features) 
> > fast(features) 
> > @time fast(features) 
> > 
> > The output is: 
> > 
> > 0.000136 seconds (10.15 k allocations: 166.417 KB) 
> > 0.12 seconds (5 allocations: 176 bytes) 
> > 
> > 
> > This is a HUGE difference! Am I missing something big? Is there a good 
> way 
> > to inspect code to figure out where I am going wrong? 
>
> This is because of type instability as you will find in the performance 
> tips. 
> Note that slow and fast are not equivalent since the fast version only 
> accept `A` or `B` but the slow version accepts any subtype of feature 
> that you may ever define. Therefore there's no way the compiler can 
> rewrite the slow version to the fast version. 
> There are optimizations that can be applied to bring down the gap but 
> there'll always be a large difference between the two. 
>
> > 
> > 
> > Thank you in advance for any guidance. 
> > 
> > 
> > -Tim 
> > 
> > 
> > 
> > 
> > 
>


[julia-users] Re: Julia console with inline graphics?

2016-04-02 Thread Tony Fong
If you can prepare your graphics as png/jpg/etc. and you are using iTerm, 
you can just look at how iTerm handles inline graphics 
here: https://www.iterm2.com/images.html

Specifically, look at how the utility script imgcat works.

Tony

On Saturday, April 2, 2016 at 8:46:51 AM UTC-4, Josh Day wrote:
>
> I believe that's iTerm being used with 
> https://github.com/Keno/TerminalExtensions.jl.  Depending on the 
> complexity of your plots, https://github.com/Evizero/UnicodePlots.jl may 
> be sufficient for you.
>
> On Saturday, April 2, 2016 at 6:45:22 AM UTC-4, Oliver Schulz wrote:
>>
>> Hi,
>>
>> I'm looking for a Julia console with inline graphics (e.g. to display 
>> Gadfly plots). There's Jupyter/IJulia, of course, but I saw a picture of 
>> something more console-like in the AxisArrays readme (at the end of 
>> https://github.com/mbauman/AxisArrays.jl#example-of-currently-implemented-behavior)
>>  
>> - does anyone know what's been used there?
>>
>> Cheers,
>>
>> Oliver
>>
>>

Re: [julia-users] Dict comprehension not type stable unless wrapped in a function

2016-04-02 Thread Tamas Papp
On Sat, Apr 02 2016, Yichao Yu wrote:

> On Sat, Apr 2, 2016 at 8:35 AM, Tamas Papp  wrote:
>> On Sat, Apr 02 2016, Yichao Yu wrote:
>>
>>> On Sat, Apr 2, 2016 at 2:28 AM, Tamas Papp  wrote:
 On Fri, Apr 01 2016, Yichao Yu wrote:

> On Fri, Apr 1, 2016 at 5:33 AM, Tamas Papp  wrote:
>> Hi,
>>
>> I ran into a problem with the result type of Dict comprehensions. If I
>> wrap the comprehension in a function, it produces the narrowest type,
>> otherwise the type of value is Any. Code looks like this:
>
> https://github.com/JuliaLang/julia/issues/7258

 Sorry, I don't understand why it is the same issue. The type of
 arguments in the comprehension is always the same, just that one version
 is wrapped in a function inside a function, and the other one isn't.
>>>
>>> And that's exactly where type inference sensitivity comes in. One of
>>> them is inferable and the other not.
>>
>> Thanks, now I think I get it. So for the time being, is wrapping
>> something in a function to make it inferable a reasonable general
>> workaround?
>>
>
> Depends on what you need. If it works, then sure. 
> Supplying the type manually also works, if you can easily determine it
> of course.

Rereading #7258 I convinced myself that I should supply the type
manually. Two questions:

1. Is Dict{keytype,valuetype}([...my comprehension...]) the preferred
syntax, that will survive to 0.5?

2. If I am computing the type of A/B and I know that their types are
TypeA and TypeB, what's the preferred way of computing the result type?
I came up with

typeof(one(TypeA)/one(TypeB))

but maybe there is something more elegant.
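
For what it's worth, a concrete sketch of both points (mine, in the 0.4 
syntax of the time):

# result type of / given the argument types
vt = typeof(one(Int) / one(Int))            # Float64

# explicitly typed Dict, sidestepping comprehension inference
d = Dict{Symbol, vt}(:a => 1/2, :b => 3/4)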

Best,

Tamas


Re: [julia-users] Dict comprehension not type stable unless wrapped in a function

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 8:35 AM, Tamas Papp  wrote:
> On Sat, Apr 02 2016, Yichao Yu wrote:
>
>> On Sat, Apr 2, 2016 at 2:28 AM, Tamas Papp  wrote:
>>> On Fri, Apr 01 2016, Yichao Yu wrote:
>>>
 On Fri, Apr 1, 2016 at 5:33 AM, Tamas Papp  wrote:
> Hi,
>
> I ran into a problem with the result type of Dict comprehensions. If I
> wrap the comprehension in a function, it produces the narrowest type,
> otherwise the type of value is Any. Code looks like this:

 https://github.com/JuliaLang/julia/issues/7258
>>>
>>> Sorry, I don't understand why it is the same issue. The type of
>>> arguments in the comprehension is always the same, just that one version
>>> is wrapped in a function inside a function, and the other one isn't.
>>
>> And that's exactly where type inference sensitivity comes in. One of
>> them is inferable and the other not.
>
> Thanks, now I think I get it. So for the time being, is wrapping
> something in a function to make it inferable a reasonable general
> workaround?
>

Depends on what you need. If it works, then sure.
Supplying the type manually also works, if you can easily determine it
of course.

> Best,
>
> Tamas


Re: [julia-users] Fast coroutines

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 8:33 AM, Erik Schnetter  wrote:
> Can you put a number on the task creating and task switching overhead?

The most obvious expensive parts are the `jl_setjmp` and `jl_longjmp`.
Not really sure how long those take.
There is also other overhead, like finding another task to run.

For this particular example, the task version is doing

1. Box a value
2. Switch task
3. Return the boxed value
4. Since it is not type stable (I don't think type inference can
handle tasks), a dynamic dispatch and another boxing for the result
5. Switch task again.

Every single one of the above is way more expensive than the addition itself.

@Carl,

If you want to do this in this style, I believe the right abstraction
is generator (i.e. iterator). The support is added on master and
there's work on making it faster.
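
For example, an iterator version of `task_iter` that stays fast might look 
like this (my sketch, using the 0.4 start/next/done protocol):

immutable VecIter{T}
    v::Vector{T}
end
Base.start(it::VecIter) = 1
Base.next(it::VecIter, i) = (it.v[i], i + 1)
Base.done(it::VecIter, i) = i > length(it.v)

function iter_sum(vec)
    s = 0.0
    for val in VecIter(vec)   # compiles to a plain loop; no task switches
        s += val
    end
    s
end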

>
> For example, if everything runs on a single core, task switching could
> (theoretically) happen within 100 ns on a modern CPU. Whether that is
> the case depends on the tradeoffs and choices made during design and
> implementation, hence the question.
>
> -erik
>
> On Sat, Apr 2, 2016 at 8:28 AM, Yichao Yu  wrote:
>> On Fri, Apr 1, 2016 at 3:45 PM, Carl  wrote:
>>> Hello,
>>>
>>> Julia is great.  I'm new fan and trying to figure out how to write simple
>>> coroutines that are fast.  So far I have this:
>>>
>>>
>>> function task_iter(vec)
>>> @task begin
>>> i = 1
>>> for v in vec
>>> produce(v)
>>> end
>>> end
>>> end
>>>
>>> function task_sum(vec)
>>> s = 0.0
>>> for val in task_iter(vec)
>>> s += val
>>> end
>>> return s
>>> end
>>>
>>> function normal_sum(vec)
>>> s = 0.0
>>> for val in vec
>>> s += val
>>> end
>>> return s
>>> end
>>>
>>> values = rand(10^6)
>>> task_sum(values)
>>> normal_sum(values)
>>>
>>> @time task_sum(values)
>>> @time normal_sum(values)
>>>
>>>
>>>
>>>   1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time)
>>>   0.006656 seconds (5 allocations: 176 bytes)
>>>
>>> I was hoping to be able to get the speeds to match (as close as possible).
>>> I've read the performance tips and I can't find anything I'm doing wrong.  I
>>> also tried out 0.5 thinking that maybe it would be faster with supporting
>>> fast anonymous functions but it was slower (1.5 seconds).
>>
>> Tasks are expensive and are basically designed for IO.
>> ~1000x slow down for this simple stuff is expected.
>>
>>>
>>>
>>> Carl
>>>
>
>
>
> --
> Erik Schnetter 
> http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: Julia console with inline graphics?

2016-04-02 Thread Josh Day
I believe that's iTerm being used 
with https://github.com/Keno/TerminalExtensions.jl.  Depending on the 
complexity of your plots, https://github.com/Evizero/UnicodePlots.jl may be 
sufficient for you.

On Saturday, April 2, 2016 at 6:45:22 AM UTC-4, Oliver Schulz wrote:
>
> Hi,
>
> I'm looking for a Julia console with inline graphics (e.g. to display 
> Gadfly plots). There's Jupyter/IJulia, of course, but I saw a picture of 
> something more console-like in the AxisArrays readme (at the end of 
> https://github.com/mbauman/AxisArrays.jl#example-of-currently-implemented-behavior)
>  
> - does anyone know what's been used there?
>
> Cheers,
>
> Oliver
>
>

Re: [julia-users] Dict comprehension not type stable unless wrapped in a function

2016-04-02 Thread Tamas Papp
On Sat, Apr 02 2016, Yichao Yu wrote:

> On Sat, Apr 2, 2016 at 2:28 AM, Tamas Papp  wrote:
>> On Fri, Apr 01 2016, Yichao Yu wrote:
>>
>>> On Fri, Apr 1, 2016 at 5:33 AM, Tamas Papp  wrote:
 Hi,

 I ran into a problem with the result type of Dict comprehensions. If I
 wrap the comprehension in a function, it produces the narrowest type,
 otherwise the type of value is Any. Code looks like this:
>>>
>>> https://github.com/JuliaLang/julia/issues/7258
>>
>> Sorry, I don't understand why it is the same issue. The type of
>> arguments in the comprehension is always the same, just that one version
>> is wrapped in a function inside a function, and the other one isn't.
>
> And that's exactly where type inference sensitivity comes in. One of
> them is inferable and the other not.

Thanks, now I think I get it. So for the time being, is wrapping
something in a function to make it inferable a reasonable general
workaround?

Best,

Tamas


[julia-users] Re: Fast coroutines

2016-04-02 Thread Dan
Hello Carl and welcome to the Julia community (and fan club).

Regarding your question:
1. The `normal_sum` version is one of the most optimized constructs you can 
imagine, so it is good that it is working so fast, even relative to the less 
common Task construct.
2. `values` appears to be a non-const global variable, which is a source 
of trouble for Julia's type-inference-based optimization (this is mentioned 
in the Performance Tips). So try to wrap things up in a function, or make the 
globals const (plain globals cannot be relied upon to keep their types). See 
the sketch below.
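
A minimal sketch of both fixes (my example; I renamed the variable since 
`values` clashes with Base.values):

const vals = rand(10^6)       # const global: inference knows its type

# or, better, keep everything local to a function
function main()
    v = rand(10^6)
    task_sum(v), normal_sum(v)
end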

Dan

On Saturday, April 2, 2016 at 2:55:51 PM UTC+3, Carl wrote:
>
> Hello,
>
> Julia is great.  I'm new fan and trying to figure out how to write simple 
> coroutines that are fast.  So far I have this:
>
>
> function task_iter(vec)
> @task begin
> i = 1
> for v in vec
> produce(v)
> end
> end
> end
>
> function task_sum(vec)
> s = 0.0
> for val in task_iter(vec)
> s += val
> end
> return s
> end
>
> function normal_sum(vec)
> s = 0.0
> for val in vec
> s += val
> end
> return s
> end
>
> values = rand(10^6)
> task_sum(values)
> normal_sum(values)
>
> @time task_sum(values)
> @time normal_sum(values)
>
>
>
>   1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time)
>   0.006656 seconds (5 allocations: 176 bytes)
>
> I was hoping to be able to get the speeds to match (as close as possible). 
>  I've read the performance tips and I can't find anything I'm doing wrong. 
>  I also tried out 0.5 thinking that maybe it would be faster with 
> supporting fast anonymous functions but it was slower (1.5 seconds).
>
>
> Carl
>
>

Re: [julia-users] Fast coroutines

2016-04-02 Thread Erik Schnetter
Can you put a number on the task creation and task switching overhead?

For example, if everything runs on a single core, task switching could
(theoretically) happen within 100 ns on a modern CPU. Whether that is
the case depends on the tradeoffs and choices made during design and
implementation, hence the question.

-erik

On Sat, Apr 2, 2016 at 8:28 AM, Yichao Yu  wrote:
> On Fri, Apr 1, 2016 at 3:45 PM, Carl  wrote:
>> Hello,
>>
>> Julia is great.  I'm a new fan and trying to figure out how to write simple
>> coroutines that are fast.  So far I have this:
>>
>>
>> function task_iter(vec)
>> @task begin
>> i = 1
>> for v in vec
>> produce(v)
>> end
>> end
>> end
>>
>> function task_sum(vec)
>> s = 0.0
>> for val in task_iter(vec)
>> s += val
>> end
>> return s
>> end
>>
>> function normal_sum(vec)
>> s = 0.0
>> for val in vec
>> s += val
>> end
>> return s
>> end
>>
>> values = rand(10^6)
>> task_sum(values)
>> normal_sum(values)
>>
>> @time task_sum(values)
>> @time normal_sum(values)
>>
>>
>>
>>   1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time)
>>   0.006656 seconds (5 allocations: 176 bytes)
>>
>> I was hoping to be able to get the speeds to match (as close as possible).
>> I've read the performance tips and I can't find anything I'm doing wrong.  I
>> also tried out 0.5 thinking that maybe it would be faster with supporting
>> fast anonymous functions but it was slower (1.5 seconds).
>
> Tasks are expensive and are basically designed for IO.
> ~1000x slowdown for this simple stuff is expected.
>
>>
>>
>> Carl
>>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Fast coroutines

2016-04-02 Thread Yichao Yu
On Fri, Apr 1, 2016 at 3:45 PM, Carl  wrote:
> Hello,
>
> Julia is great.  I'm a new fan and trying to figure out how to write simple
> coroutines that are fast.  So far I have this:
>
>
> function task_iter(vec)
> @task begin
> i = 1
> for v in vec
> produce(v)
> end
> end
> end
>
> function task_sum(vec)
> s = 0.0
> for val in task_iter(vec)
> s += val
> end
> return s
> end
>
> function normal_sum(vec)
> s = 0.0
> for val in vec
> s += val
> end
> return s
> end
>
> values = rand(10^6)
> task_sum(values)
> normal_sum(values)
>
> @time task_sum(values)
> @time normal_sum(values)
>
>
>
>   1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time)
>   0.006656 seconds (5 allocations: 176 bytes)
>
> I was hoping to be able to get the speeds to match (as close as possible).
> I've read the performance tips and I can't find anything I'm doing wrong.  I
> also tried out 0.5 thinking that maybe it would be faster with supporting
> fast anonymous functions but it was slower (1.5 seconds).

Tasks are expensive and are basically designed for IO.
~1000x slowdown for this simple stuff is expected.
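For comparison, a minimal sketch (hypothetical names, using the 0.4-era 
start/next/done iteration protocol) of an iterator that avoids the 
per-element task switch:

immutable VecIter{T}
    vec::Vector{T}
end

Base.start(it::VecIter) = 1
Base.next(it::VecIter, i) = (it.vec[i], i + 1)
Base.done(it::VecIter, i) = i > length(it.vec)

function iter_sum(vec)
    s = 0.0
    for val in VecIter(vec)   # plain iteration, no Task involved
        s += val
    end
    return s
end

iter_sum(rand(10^6))   # runs at normal_sum-like speed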

>
>
> Carl
>


[julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Sisyphuss
Didn't numpy's `transpose()` function make your life hard?


Re: [julia-users] Dict comprehension not type stable unless wrapped in a function

2016-04-02 Thread Yichao Yu
On Sat, Apr 2, 2016 at 2:28 AM, Tamas Papp  wrote:
> On Fri, Apr 01 2016, Yichao Yu wrote:
>
>> On Fri, Apr 1, 2016 at 5:33 AM, Tamas Papp  wrote:
>>> Hi,
>>>
>>> I ran into a problem with the result type of Dict comprehensions. If I
>>> wrap the comprehension in a function, it produces the narrowest type,
>>> otherwise the type of value is Any. Code looks like this:
>>
>> https://github.com/JuliaLang/julia/issues/7258
>
> Sorry, I don't understand why it is the same issue. The type of
> arguments in the comprehension is always the same, just that one version
> is wrapped in a function inside a function, and the other one isn't.

And that's exactly where type inference sensitivity comes in. One of
them is inferable and the other not.

>
> Best,
>
> Tamas


Re: [julia-users] Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Tamas Papp
On Sat, Apr 02 2016, Spiritus Pap wrote:

> Hi there,
>
> TL;DR: A lot of people that could use julia (researchers currently using 
> python) won't. I give an example of how it makes my life hard, and I try to 
> suggest solutions.

While there are surely people who could use Julia but aren't, this
applies to many languages. IMO people should be choosing languages based on
features which reflect deep architectural choices about a language: for
Julia these could be parametric types, multimethods, the combination of the
two, macros, numerical performance, etc.

The problem with trivial features, such as which operators have an infix
form in Base or the choice of indexing, is that there are so many options,
each with reasonable arguments in its favor, that by definition it is
impossible to please everyone.

> I REALLY tried to use julia, I really did. I tried to convince my friends 
> at work to use it too.
> However, there are a few things that make it unusable the way it is.

If such trivial features are dealbreakers for you, I would assume that
you have not invested enough time in learning about more important
features of Julia. In any case, "unusable" is a very strong word, and
would surprise many people who are already using the language
productively.

> Decisions that in my opinion were made by people who do not write 
> research-code:
> 1. Indexing in Julia being 1-based and inclusive, instead of 0-based and 
> not including the end (like in C/Python/lots of other stuff)
> 2. No simple integer-division operator.
>
> A simple example why it makes my *life hard*: Assume there is an array of 
> size 100, and i want to take the i_th portion of it out of n. This is a 
> common scenario for research-code, at least for me and my friends.
> In python:
> a[i*100/n:(i+1)*100/n]
> In julia:
> a[1+div(i*100,n):div((i+1)*100,n)]

Define

portion(i,n,m=100) = 1+div(i*m,n):div((i+1)*m,n)

and use

a[portion(i,n)]

and your code will become much cleaner.

> A lot more cumbersome in julia, and it is annoying and unattractive. This 
> is just a simple example.

If you are repeating something all the time and find it cumbersome, wrap
it in a function.

> *About the division:* I would suggest *adding *an integer division 
> *operator*, such as *//*. Would help a lot. Yes, I think it should be by 
> default, so that newcomers would need the least amount of effort to use 
> julia comfortably.

// is already being used in Base for rational division.
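For instance:

3 // 4            # Rational{Int64}: 3//4
3 // 4 + 1 // 4   # 1//1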

> *About the indexing:* I realize that this is a decision made a long time 
> ago, and everything is built this way. Yes, it is like matlab, and no, it 
> is not a good thing.
> I am a mathematician, and I almost always index my sequences and expressions 
> from 0; it usually just makes more sense.

See https://github.com/JuliaLang/julia/issues/558 and countless
discussions about this. 

> The problem is both in array (e.g. a[0]) and in slice (e.g. 0:10).
> An array could be solved perhaps by a *custom *0 based *array object*. But 
> the slice? Maybe adding a 0 based *slice operator*(such as .. or _)? is it 
> possible to do so in a library?

Yes, see eg

https://github.com/alsam/OffsetArrays.jl

> I'd be happy to write these myself, but I believe these need to be in the 
> standard library. Again, so that newcomers would need the least amount of 
> effort to use julia comfortably.

The tendency is to go in the opposite direction: instead of stuffing
everything into Base, move code into packages.

Best,

Tamas


[julia-users] Re: Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Tony Kelman
There's a Unicode integer-division operator: type \div, then hit tab, and you 
get ÷. Many editors have plugins for similar LaTeX-tab-completion to make 
Unicode math symbols easier to write.
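For instance, applied to the slicing example from this thread (a small 
sketch):

a = collect(1:100)
i, n = 3, 7
7 ÷ 2                          # 3, same as div(7, 2)
a[1 + i*100÷n : (i+1)*100÷n]   # the i-th of n roughly equal chunks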

Nearly anything can be implemented as a library in Julia. See 
https://github.com/alsam/OffsetArrays.jl, 
https://github.com/eschnett/FlexibleArrays.jl, and several others. There's 
a currently very active discussion on GitHub about how to make Julia's 
standard library accommodate custom array types as well as it can. See 
http://julialang.org/blog/2016/03/arrays-iteration as well.


On Saturday, April 2, 2016 at 4:55:55 AM UTC-7, Spiritus Pap wrote:
>
> Hi there,
>
> TL;DR: A lot of people that could use julia (researchers currently using 
> python) won't. I give an example of how it makes my life hard, and I try to 
> suggest solutions.
>
> I've been introduced to julia about a month ago.
> I'm an algorithmic researcher, I write mostly research code: statistics, 
> image processing, algorithms, etc.
> I use mostly python with numpy for my stuff, and C/++ when I need 
> performance.
> I was really happy when I heard of julia, because it takes the simplicity 
> of python and combines it with JIT compilation for speed!
>
> I REALLY tried to use julia, I really did. I tried to convince my friends 
> at work to use it too.
> However, there are a few things that make it unusable the way it is.
>
> Decisions that in my opinion were made by people who do not write 
> research-code:
> 1. Indexing in Julia being 1-based and inclusive, instead of 0-based and 
> not including the end (like in C/Python/lots of other stuff)
> 2. No simple integer-division operator.
>
> A simple example why it makes my *life hard*: Assume there is an array of 
> size 100, and I want to take the i-th portion of it out of n. This is a 
> common scenario for research-code, at least for me and my friends.
> In python:
> a[i*100/n:(i+1)*100/n]
> In julia:
> a[1+div(i*100,n):div((i+1)*100,n)]
>
> A lot more cumbersome in julia, and it is annoying and unattractive. This 
> is just a simple example.
>
> *Possible solutions:*
> The reason I'm writing this post is because I want to use julia, and I 
> want it to become great.
> *About the division:* I would suggest *adding *an integer division 
> *operator*, such as *//*. Would help a lot. Yes, I think it should be by 
> default, so that newcomers would need the least amount of effort to use 
> julia comfortably.
>
> *About the indexing:* I realize that this is a decision made a long time 
> ago, and everything is built this way. Yes, it is like matlab, and no, it 
> is not a good thing.
> I am a mathematician, and I almost always index my sequences and expressions 
> from 0; it usually just makes more sense.
> The problem is both in array (e.g. a[0]) and in slice (e.g. 0:10).
> An array could be solved perhaps by a *custom *0 based *array object*. 
> But the slice? Maybe adding a 0 based *slice operator*(such as .. or _)? 
> is it possible to do so in a library?
>
> I'd be happy to write these myself, but I believe these need to be in the 
> standard library. Again, so that newcomers would need the least amount of 
> effort to use julia comfortably.
> If you have better suggestions, I'd be happy to hear.
>
> Thank you for your time
>


[julia-users] Julia is a great idea, but it disqualifies itself for a big portion of its potential consumers

2016-04-02 Thread Spiritus Pap
Hi there,

TL;DR: A lot of people that could use julia (researchers currently using 
python) won't. I give an example of how it makes my life hard, and I try to 
suggest solutions.

I've been introduced to julia about a month ago.
I'm an algorithmic researcher, I write mostly research code: statistics, 
image processing, algorithms, etc.
I use mostly python with numpy for my stuff, and C/++ when I need 
performance.
I was really happy when I heard of julia, because it takes the simplicity 
of python and combines it with JIT compilation for speed!

I REALLY tried to use julia, I really did. I tried to convince my friends 
at work to use it too.
However, there are a few things that make it unusable the way it is.

Decisions that in my opinion were made by people who do not write 
research-code:
1. Indexing in Julia being 1-based and inclusive, instead of 0-based and 
not including the end (like in C/Python/lots of other stuff)
2. No simple integer-division operator.

A simple example of why it makes my *life hard*: Assume there is an array of 
size 100, and I want to take the i-th portion of it out of n. This is a 
common scenario for research-code, at least for me and my friends.
In python:
a[i*100/n:(i+1)*100/n]
In julia:
a[1+div(i*100,n):div((i+1)*100,n)]

A lot more cumbersome in julia, and it is annoying and unattractive. This 
is just a simple example.

*Possible solutions:*
The reason I'm writing this post is because I want to use julia, and I want 
it to become great.
*About the division:* I would suggest *adding* an integer-division 
*operator*, such as *//*. It would help a lot. Yes, I think it should be 
available by default, so that newcomers would need the least amount of 
effort to use julia comfortably.

*About the indexing:* I realize that this is a decision made a long time 
ago, and everything is built this way. Yes, it is like matlab, and no, it 
is not a good thing.
I am a mathematician, and I almost always index my sequences and expressions 
from 0; it usually just makes more sense.
The problem is both in array (e.g. a[0]) and in slice (e.g. 0:10).
An array could perhaps be handled by a *custom* 0-based *array object*. But 
the slice? Maybe adding a 0-based *slice operator* (such as .. or _)? Is it 
possible to do so in a library?

I'd be happy to write these myself, but I believe these need to be in the 
standard library. Again, so that newcomers would need the least amount of 
effort to use julia comfortably.
If you have better suggestions, I'd be happy to hear.

Thank you for your time


[julia-users] "LoadError: UndefVarError: xgboost not defined" while using XGBoost package

2016-04-02 Thread Aditya Sharma
While trying to build an XGBoost model, I am getting the error below. How 
can I make this work?

# train xgboost
num_round = 100
using XGBoost
bst = xgboost(train_X, num_round, label=train_Y, eta=1, max_depth=2)


LoadError: UndefVarError: xgboost not defined
while loading In[33], in expression starting on line 4



[julia-users] Fast coroutines

2016-04-02 Thread Carl
Hello,

Julia is great.  I'm a new fan and trying to figure out how to write simple 
coroutines that are fast.  So far I have this:


function task_iter(vec)
    @task begin
        for v in vec
            produce(v)
        end
    end
end

function task_sum(vec)
    s = 0.0
    for val in task_iter(vec)
        s += val
    end
    return s
end

function normal_sum(vec)
    s = 0.0
    for val in vec
        s += val
    end
    return s
end

values = rand(10^6)
task_sum(values)
normal_sum(values)

@time task_sum(values)
@time normal_sum(values)



  1.067081 seconds (2.00 M allocations: 30.535 MB, 1.95% gc time)
  0.006656 seconds (5 allocations: 176 bytes)

I was hoping to be able to get the speeds to match (as closely as possible). 
I've read the performance tips and I can't find anything I'm doing wrong. 
I also tried out 0.5, thinking that it might be faster thanks to its support 
for fast anonymous functions, but it was slower (1.5 seconds).


Carl



[julia-users] Re: Can Julia handle a large number of methods for a function (and a lot of types)?

2016-04-02 Thread Oliver Schulz
Hi Andrew,

sorry, I overlooked your post, somehow ...

On Monday, March 28, 2016 at 3:47:42 AM UTC+2, Andrew Keller wrote:
>
> Just a heads up that I've been working on something similar for the past 
> six months or so.
>

Very nice - I'm open to collaborating on this topic, of course. At the moment, 
we're still actively using my Scala/Akka-based system 
(https://github.com/daqcore/daqcore-scala) in our group, but I'm starting 
to map out its Julia replacement. daqcore-scala currently implements some 
VME flash-ADCs and some slow-control devices like a LabJack, some HV 
supplies, vacuum pressure gauges, etc., but the architecture is not device 
specific. The Julia version is intended to be equally generic, but will 
initially target the same/similar devices. I'd like to make it more 
modular, though; it doesn't all have to end up in one package.

One requirement will be coordinated parallel access to the instruments from 
different places in the code (one Task must not send a command to an 
instrument while another task isn't yet done with its query, which may span 
multiple I/O operations). Also, multi-host operation for 
high-throughput applications might become necessary at some point. The 
basis for both came for free with Scala/Akka actors, and I started 
Actors.jl (https://github.com/daqcore/Actors.jl) with this in mind. Another 
thing that was easy in Scala was representing (potentially overlapping) 
device classes using traits. I'm thinking about how best to do this in Julia - 
but due to Julia's dynamic nature, explicit device classes may not be 
absolutely necessary. Still, I'd like to use as much explicit typing as 
possible: when dealing with hardware, compile-time checks are very much 
preferable to run-time checks with live devices (we can't afford to implement 
a full simulator for each device). :-)
 

> As I've been working on this since I started learning Julia, there are a 
> number of things I did early on that I regret and am now fixing. One thing 
> you'll find becomes annoying quickly is namespace issues: if you want to 
> put different instruments in different modules, shared properties or 
> functions
>

Yes, that was one of the reasons I wanted to explore getindex/setindex!: it 
avoids namespace conflicts on the functions. Of course this shifts the 
issue to the device feature classes, but I hope that a central package can 
host a set that will cover most devices (and be open to people contributing 
additional common features). Highly device-specific features can live 
within the namespace of the package implementing the device in question,
and shouldn't be exported.
 

> For instance, on a vector network analyzer you could write something like:
>
> vna[Frequency] = 4GHz:1MHz:10GHz
>

Yep, this is exactly what I had in mind. Of course there will also have to 
be a producer/consumer mechanism for devices with high-rate/continuous 
output (e.g. fast waveform digitizers). Channels and/or actors are an 
attractive approach.
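Just to make the idea concrete, a hypothetical minimal sketch (names and 
types invented here, not Andrew's actual API; plain floats in Hz instead of 
units):

abstract InstrumentProperty
immutable Frequency <: InstrumentProperty end

type VNA
    settings::Dict{DataType,Any}
end

Base.setindex!(vna::VNA, val, ::Type{Frequency}) =
    (vna.settings[Frequency] = val)   # a real driver would also send a command
Base.getindex(vna::VNA, ::Type{Frequency}) = vna.settings[Frequency]

vna = VNA(Dict{DataType,Any}())
vna[Frequency] = 4e9:1e6:10e9   # start:step:stop in Hz
vna[Frequency]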
 

> and thereby set frequency start, stop, and step at once. I have a units 
> package for Julia v0.5 that I'm also developing when I have the time (
> Unitful.jl )
>
 
I've seen Unitful.jl - how does it compare to SIUnits.jl, especially 
performance-wise? Are they orthogonal or two approaches to the same problem?


Cheers,

Oliver



[julia-users] Julia console with inline graphics?

2016-04-02 Thread Oliver Schulz
Hi,

I'm looking for a Julia console with inline graphics (e.g. to display 
Gadfly plots). There's Jupyter/IJulia, of course, but I saw a picture of 
something more console-like in the AxisArrays readme (at the end of 
https://github.com/mbauman/AxisArrays.jl#example-of-currently-implemented-behavior)
 
- does anyone know what's been used there?

Cheers,

Oliver



Re: [julia-users] Defining part of a function outside its module

2016-04-02 Thread jock . lawrie
Great, thanks Mauro, most helpful.


On Friday, April 1, 2016 at 4:09:30 PM UTC+11, Mauro wrote:
>
> Maybe like so: 
>
> julia> module Myfunc 
>   export f 
>   f(x) = g(x) * g(x) 
>   function g end 
>end 
>
> julia> using Myfunc 
>
> julia> f(4) 
> ERROR: MethodError: `g` has no method matching g(::Int64) 
>  in f at none:3 
>
> julia> Myfunc.g(x) = 2x 
> g (generic function with 1 method) 
>
> julia> f(4) 
> 64 
>
> On Fri, 2016-04-01 at 05:39, jock@gmail.com  wrote: 
> > Hi all, 
> > 
> > This works: 
> > f(x) = g(x) * g(x) 
> > g(x) = x + 1 
> > f(2)    # 9 
> > 
> > That is, f(x) isn't fully defined until g is defined. 
> > 
> > But if f(x) is in a module it doesn't work: 
> > module myfunc 
> >export f 
> >f(x) = g(x) * g(x) 
> > end 
> > 
> > using myfunc 
> > g(x) = x + 1 
> > f(2)   # ERROR: UndefVarError: g not defined 
> > 
> > That is, myfunc.f(x) cannot access global g(x) if g(x) is defined after 
> > myfunc (or even before). Is it possible to enable myfunc.f to "see" g? 
> > Any other ways to achieve the same thing? 
> > 
> > Cheers, 
> > Jock 
>