[julia-users] Re: Function runs slow when called on new/different arguments?

2016-10-26 Thread Andrew
What's "slow?"

foo!() is compiled and cached, but @time isn't. Are you sure you're not 
timing compilation of the @time macro itself on your 3rd run?
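
For reference, method compilation is cached per argument-type combination, not per instance, so a fresh instance of the same type should not recompile. A minimal sketch (current Julia syntax; `TestData` and `foo!` are made-up stand-ins for the original code):

```julia
struct TestData
    x::Vector{Float64}
end
TestData() = TestData(rand(1000))

# In-place update; the `!` convention marks mutation.
foo!(t::TestData) = (t.x .*= 2.0; nothing)

t1 = TestData(); t2 = TestData()
foo!(t1)          # first call on this argument type: includes compilation
@time foo!(t1)    # fast: method already compiled
@time foo!(t2)    # also fast: same argument type, so the cached code is reused
```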

On Wednesday, October 26, 2016 at 4:16:55 PM UTC-4, Michael Wooley wrote:
>
> Hi, I'm trying to speed up some code and have found something curious that 
> I can't quite understand from the "performance tips" section. 
>
> I understand that I shouldn't time my code on the first run because that 
> will include compilation time and so forth. 
>
> The odd thing that I can't understand is that there seems to be a lot of 
> overhead every time I pass an new argument to my function. So, e.g.,
>
> # Create two instances of type test
> test1 = test()
> test2 = test()
> # Run to get in cache - slow (as expected)
> foo!(test1)
> # Run again - fast (as expected)
> foo!(test1)
> # Run on test2 - Slow (not expected; should be fast because foo!() is in cache?)
> @time foo!(test2)
> # Run on test2 again - fast (??)
> @time foo!(test2)
>
> I have used @code_warntype to try to get rid of type instability in my 
> code to no avail. 
>
> I know that the main bottleneck in my code is a triple-nested loop over 
> large arrays.
>
> I recognize that my pseudo-example is kind of vague but thought I'd try 
> this first in case this sort of behavior is indicative of a basic issue 
> that I've overlooked. I can provide my full code if that would be helpful. 
> Thanks in advance! 
>


Re: [julia-users] Re: Any 0.5 performance tips?

2016-10-14 Thread Andrew
I've found the main problem. I have a function which repeatedly accesses a 
6-dimensional array in a loop. This function took no time in 0.4, but is 
actually very slow in 0.5. My problem looks very similar to 18774 
<https://github.com/JuliaLang/julia/issues/18774>. 
Here's an example:

A3 = rand(10, 10, 10);
function test3(A, nx1, nx2, nx3)
  for i = 1:10_000_000
A[nx1, nx2, nx3]
  end
end

A5 = rand(10, 10, 10, 10, 10);
function test5(A, nx1, nx2, nx3, nx4, nx5)
  for i = 1:10_000_000
A[nx1, nx2, nx3, nx4, nx5]
  end
end

A6 = rand(10, 10, 10, 10, 10, 10);
function test6(A, nx1, nx2, nx3, nx4, nx5, nx6)
  for i = 1:10_000_000
A[nx1, nx2, nx3, nx4, nx5, nx6]
  end
end
function test6_fast(A, nx1, nx2, nx3, nx4, nx5, nx6)
  Asize = size(A)
  for i = 1:10_000_000
A[sub2ind(Asize, nx1, nx2, nx3, nx4, nx5, nx6 )]
  end
end
@time test3(A3, 1, 1, 1)
@time test5(A5, 1, 1, 1, 1, 1)
@time test6(A6, 1, 1, 1, 1, 1, 1)
@time test6_fast(A6, 1, 1, 1, 1, 1, 1)


test6 takes 0.01s in 0.4 and takes 15s in 0.5. Using a linear index fixes 
the problem.

On Friday, September 30, 2016 at 2:30:02 AM UTC-4, Mauro wrote:
>
> On Fri, 2016-09-30 at 03:45, Andrew wrote: 
> > I checked, and my objective function is evaluated exactly as many times 
> > under 0.4 as it is under 0.5. The number of iterations must be the same. 
> > 
> > I also looked at the times more precisely. For one particular function 
> > call in the code, I have: 
> > 
> > 0.4 with old code: 6.7s 18.5M allocations 
> > 0.4 with 0.5 style code(regular anonymous functions) 11.6s, 141M 
> > allocations 
> > 0.5: 36.2s, 189M allocations 
> > 
> > Surprisingly, 0.4 is still much faster even without the fast anonymous 
> > functions trick. It doesn't look like 0.5 is generating many more 
> > allocations than 0.4 on the same code, the time is just a lot slower. 
>
> Sounds like you're not far off a minimal, working example.  Post it and 
> I'm sure it will be dissected in no time. (And an issue can be filed.) 
>
> > On Thursday, September 29, 2016 at 3:36:46 PM UTC-4, Tim Holy wrote: 
> >> 
> >> No real clue about what's happening, but my immediate thought was that if 
> >> your algorithm is iterative and uses some kind of threshold to decide 
> >> convergence, then it seems possible that a change in the accuracy of some 
> >> computation might lead to it getting "stuck" occasionally due to roundoff 
> >> error. That's probably more likely to happen because of some kind of 
> >> worsening rather than some improvement, but either is conceivable. 
> >> 
> >> If that's even a possible explanation, I'd check for unusually-large 
> >> numbers of iterations and then print some kind of convergence info. 
> >> 
> >> Best, 
> >> --Tim 
> >> 
> >> On Thu, Sep 29, 2016 at 1:21 PM, Andrew wrote: 
> >> 
> >>> In the 0.4 version the above times are pretty consistent. I never observe 
> >>> any several thousand allocation calls. I wonder if compilation is occurring 
> >>> repeatedly. 
> >>> 
> >>> This isn't terribly pressing for me since I'm not currently working on 
> >>> this project, but if there's an easy fix it would be useful for future work. 
> >>> 
> >>> (sorry I didn't mean to post twice. For some reason hitting spacebar was 
> >>> interpreted as the post command?) 
> >>> 
> >>> 
> >>> On Thursday, September 29, 2016 at 2:15:35 PM UTC-4, Andrew wrote: 
> >>>> 
> >>>> I've used @code_warntype everywhere I can think to and I've only found 
> >>>> one Core.box. The @code_warntype looks like this 
> >>>> 
> >>>> Variables: 
> >>>>   #self#::#innerloop#3133{#bellman_obj} 
> >>>>   state::State{IdioState,AggState} 
> >>>>   EVspline::Dierckx.Spline1D 
> >>>>   model::Model{CRRA_Family,AggState} 
> >>>>   policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}} 
> >>>>   OO::NW 
> >>>> 
> >>>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
> >>>> 
> >>>> Body: 
> >>>>   begin 
> >>>> 
> >>>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},

[julia-users] Re: I have two data collection for mapreduce() vs. generator, which is correct/better implementation? (Julia 0.5)

2016-10-11 Thread Andrew
I don't know much about @simd. I see it pop up when people use loops made 
up of very simple arithmetic operations, but I don't know if map can take 
advantage of it for your more complicated function.
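
For what it's worth, @simd only applies to explicit loops, so to even try it you would rewrite the reduction as a loop. A sketch (the macros are hints, not guarantees, and the speedup for this division-heavy body may well be nil):

```julia
# MAPE as an explicit loop. @inbounds and @simd are hints; whether the
# division/abs actually vectorize is up to the compiler.
function mape_loop(a, p)
    s = 0.0
    @inbounds @simd for i in eachindex(a, p)
        s += abs((a[i] - p[i]) / a[i])
    end
    return 100 * s / length(a)
end

a = rand(10_000)
p = rand(10_000)
mape_loop(a, p)
```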

On Friday, October 7, 2016 at 4:29:20 AM UTC-4, Martin Florek wrote:
>
> Thanks Andrew for answer. 
> I also have experience that eachindex() is slightly faster. In Performance 
> tips I found macros e.g. @simd. Do you have any experience with them?
>
> On Thursday, 6 October 2016 16:13:22 UTC+2, Martin Florek wrote:
>>
>>
>> Hi All,
>>
>> I'm new to Julia and I need to decide on the more correct/better 
>> implementation for two data collections. I have implemented mean absolute 
>> percentage error (MAPE) for *Generator Expressions* (Comprehensions 
>> without brackets):
>>
>> a = rand(10_000_000)
>> p = rand(10_000_000)
>>
>> err(actual, predicted) = (actual - predicted) / actual
>>
>> f(a, p) = 100 * sumabs(err(a[i], p[i]) for i in eachindex(a)) /length(a)
>>
>> and one with the *mapreduce()* function.
>>
>> function mapre(a, p)
>> s = mapreduce(t -> begin b,c=t; abs((b - c) / b) end, +, zip(a, p))
>> s * 100/length(a)
>> end
>>
>> When I compare *@time f(a, p)* I get:
>>
>> 0.026515 seconds (11 allocations: 304 bytes) 797.1301337918511
>>
>> and *@time mapre(a, p):*
>>
>> 0.079932 seconds (9 allocations: 272 bytes) 797.1301337918511
>>
>>
>> Thanks in advance,
>> Martin
>>
>

[julia-users] Re: I have two data collection for mapreduce() vs. generator, which is correct/better implementation? (Julia 0.5)

2016-10-06 Thread Andrew
I'm not getting any speed difference.
julia> @time f(a, p)
  0.051503 seconds (11 allocations: 304 bytes)
801.6933350167617

julia> @time mapre(a, p)
  0.059369 seconds (9 allocations: 272 bytes)
801.6933350167617


More seriously, I did:
julia> using BenchmarkTools

julia> @benchmark f(a, p)
BenchmarkTools.Trial: 
  samples:  98
  evals/sample: 1
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  144.00 bytes
  allocs estimate:  7
  minimum time: 51.16 ms (0.00% GC)
  median time:  51.21 ms (0.00% GC)
  mean time:51.24 ms (0.00% GC)
  maximum time: 51.57 ms (0.00% GC)

julia> @benchmark mapre(a, p)
BenchmarkTools.Trial: 
  samples:  92
  evals/sample: 1
  time tolerance:   5.00%
  memory tolerance: 1.00%
  memory estimate:  112.00 bytes
  allocs estimate:  5
  minimum time: 54.47 ms (0.00% GC)
  median time:  54.57 ms (0.00% GC)
  mean time:54.64 ms (0.00% GC)
  maximum time: 55.72 ms (0.00% GC)



Personally I think the generator looks nicer, and it's slightly faster.
I like this syntax the best:
f2(a, p) = 100 * sumabs(err(ai, pi) for (ai, pi) in zip(a, p)) /length(a)
though @benchmark has this slightly slower than the eachindex() version. No 
idea why.


On Thursday, October 6, 2016 at 10:13:22 AM UTC-4, Martin Florek wrote:
>
>
> Hi All,
>
> I'm new to Julia and I need to decide on the more correct/better 
> implementation for two data collections. I have implemented mean absolute 
> percentage error (MAPE) for *Generator Expressions* (Comprehensions 
> without brackets):
>
> a = rand(10_000_000)
> p = rand(10_000_000)
>
> err(actual, predicted) = (actual - predicted) / actual
>
> f(a, p) = 100 * sumabs(err(a[i], p[i]) for i in eachindex(a)) /length(a)
>
> and one with the *mapreduce()* function.
>
> function mapre(a, p)
> s = mapreduce(t -> begin b,c=t; abs((b - c) / b) end, +, zip(a, p))
> s * 100/length(a)
> end
>
> When I compare *@time f(a, p)* I get:
>
> 0.026515 seconds (11 allocations: 304 bytes) 797.1301337918511
>
> and *@time mapre(a, p):*
>
> 0.079932 seconds (9 allocations: 272 bytes) 797.1301337918511
>
>
> Thanks in advance,
> Martin
>


[julia-users] Re: memory allocation in nested loops

2016-10-04 Thread Andrew
You can do view(r, :, i_2) in 0.5. I think slice does the same thing in 0.4.

Also as a general point, you would have realized you hadn't defined r in 
the function if you had wrapped your entire program in a function, like
function test()
(your code)
end
test()
This would throw an error with your original code since r would not be in 
scope in nested_loop!().

I occasionally have accidentally used outside global variables in my 
functions like you did, and this can lead to bizarre and difficult to find 
bugs.
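
A minimal sketch of the difference (the view still allocates a small wrapper object, just not a copy of the data):

```julia
A = rand(100, 100)

col_copy = A[:, 1]        # getindex with a range copies the data (allocates a new Vector)
col_view = view(A, :, 1)  # a SubArray that shares A's memory

col_view[1] = -1.0        # writes through to A
A[1, 1]                   # now -1.0; mutating col_copy would not touch A
```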

On Tuesday, October 4, 2016 at 8:45:53 AM UTC-4, Niccolo' Antonello wrote:
>
> what is the best way to slice an array without allocating? I thank you!
>
> On Tuesday, October 4, 2016 at 1:20:29 PM UTC+2, Kristoffer Carlsson wrote:
>>
>> The slicing of "r" will also make a new array and thus allocate. 
>
>

Re: [julia-users] Re: Any 0.5 performance tips?

2016-09-29 Thread Andrew
I checked, and my objective function is evaluated exactly as many times 
under 0.4 as it is under 0.5. The number of iterations must be the same.

I also looked at the times more precisely. For one particular function call 
in the code, I have:

0.4 with old code: 6.7s 18.5M allocations
0.4 with 0.5 style code(regular anonymous functions) 11.6s, 141M 
allocations 
0.5: 36.2s, 189M allocations

Surprisingly, 0.4 is still much faster even without the fast anonymous 
functions trick. It doesn't look like 0.5 is generating many more 
allocations than 0.4 on the same code, the time is just a lot slower.

On Thursday, September 29, 2016 at 3:36:46 PM UTC-4, Tim Holy wrote:
>
> No real clue about what's happening, but my immediate thought was that if 
> your algorithm is iterative and uses some kind of threshold to decide 
> convergence, then it seems possible that a change in the accuracy of some 
> computation might lead to it getting "stuck" occasionally due to roundoff 
> error. That's probably more likely to happen because of some kind of 
> worsening rather than some improvement, but either is conceivable.
>
> If that's even a possible explanation, I'd check for unusually-large 
> numbers of iterations and then print some kind of convergence info.
>
> Best,
> --Tim
>
> On Thu, Sep 29, 2016 at 1:21 PM, Andrew wrote:
>
>> In the 0.4 version the above times are pretty consistent. I never observe 
>> any several thousand allocation calls. I wonder if compilation is occurring 
>> repeatedly. 
>>
>> This isn't terribly pressing for me since I'm not currently working on 
>> this project, but if there's an easy fix it would be useful for future work.
>>
>> (sorry I didn't mean to post twice. For some reason hitting spacebar was 
>> interpreted as the post command?)
>>
>>
>> On Thursday, September 29, 2016 at 2:15:35 PM UTC-4, Andrew wrote:
>>>
>>> I've used @code_warntype everywhere I can think to and I've only found 
>>> one Core.box. The @code_warntype looks like this
>>>
>>> Variables:
>>>   #self#::#innerloop#3133{#bellman_obj}
>>>   state::State{IdioState,AggState}
>>>   EVspline::Dierckx.Spline1D
>>>   model::Model{CRRA_Family,AggState}
>>>   policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}}
>>>   OO::NW
>>>   
>>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>>
>>> Body:
>>>   begin 
>>>   
>>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>>  
>>> = $(Expr(:new, 
>>> ##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj},
>>>  
>>> :(state), :(EVspline), :(model), :(policy), :(OO), 
>>> :((Core.getfield)(#self#,:bellman_obj)::#bellman_obj)))
>>>   SSAValue(0) = 
>>> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>>   
>>> (Core.setfield!)((Core.getfield)(#self#::#innerloop#3133{#bellman_obj},:obj)::CORE.BOX,:contents,SSAValue(0))::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>>   return SSAValue(0)
>>>   
>>> end::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>>
>>>
>>> I put the CORE.BOX in all caps near the bottom.
>>>
>>> I have no idea if this is actually a problem. The return type is stable. 
>>> Also, this function appears in an outer loop.
>>>
>>> What I noticed putting a @time in places is that in 0.5, occasionally 
>>> calls to my nonlinear equation solver take a really long time, like here:
>>>
>>>   0.069224 seconds (9.62 k allocations: 487.873 KB)
>>>   0.07 seconds (39 allocations: 1.922 KB)
>>>   0.06 seconds (29 allocations: 1.391 KB)
>>>   0.11 seconds (74 allocations: 3.781 KB)
>>>   0.09 seconds (54 allocations: 2.719 KB)
>>>   0.08 seconds (54 allocations: 2.719 KB)
>>>   0.08 seconds (49 allocations: 2.453 KB)
>>>   0.07 seconds (44 allocations: 2.188 KB)
>>>   0.0

[julia-users] Re: Any 0.5 performance tips?

2016-09-29 Thread Andrew
In the 0.4 version the above times are pretty consistent. I never observe 
any several thousand allocation calls. I wonder if compilation is occurring 
repeatedly. 

This isn't terribly pressing for me since I'm not currently working on this 
project, but if there's an easy fix it would be useful for future work.

(sorry I didn't mean to post twice. For some reason hitting spacebar was 
interpreted as the post command?)

On Thursday, September 29, 2016 at 2:15:35 PM UTC-4, Andrew wrote:
>
> I've used @code_warntype everywhere I can think to and I've only found one 
> Core.box. The @code_warntype looks like this
>
> Variables:
>   #self#::#innerloop#3133{#bellman_obj}
>   state::State{IdioState,AggState}
>   EVspline::Dierckx.Spline1D
>   model::Model{CRRA_Family,AggState}
>   policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}}
>   OO::NW
>   
> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>
> Body:
>   begin 
>   
> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>  
> = $(Expr(:new, 
> ##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj},
>  
> :(state), :(EVspline), :(model), :(policy), :(OO), 
> :((Core.getfield)(#self#,:bellman_obj)::#bellman_obj)))
>   SSAValue(0) = 
> #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>   
> (Core.setfield!)((Core.getfield)(#self#::#innerloop#3133{#bellman_obj},:obj)::CORE.BOX,:contents,SSAValue(0))::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>   return SSAValue(0)
>   
> end::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>
>
> I put the CORE.BOX in all caps near the bottom.
>
> I have no idea if this is actually a problem. The return type is stable. 
> Also, this function appears in an outer loop.
>
> What I noticed putting a @time in places is that in 0.5, occasionally 
> calls to my nonlinear equation solver take a really long time, like here:
>
>   0.069224 seconds (9.62 k allocations: 487.873 KB)
>   0.07 seconds (39 allocations: 1.922 KB)
>   0.06 seconds (29 allocations: 1.391 KB)
>   0.11 seconds (74 allocations: 3.781 KB)
>   0.09 seconds (54 allocations: 2.719 KB)
>   0.08 seconds (54 allocations: 2.719 KB)
>   0.08 seconds (49 allocations: 2.453 KB)
>   0.07 seconds (44 allocations: 2.188 KB)
>   0.07 seconds (44 allocations: 2.188 KB)
>   0.06 seconds (39 allocations: 1.922 KB)
>   0.07 seconds (39 allocations: 1.922 KB)
>   0.06 seconds (39 allocations: 1.922 KB)
>   0.05 seconds (34 allocations: 1.656 KB)
>   0.05 seconds (34 allocations: 1.656 KB)
>   0.04 seconds (29 allocations: 1.391 KB)
>   0.04 seconds (24 allocations: 1.125 KB)
>   0.007399 seconds (248 allocations: 15.453 KB)
>   0.09 seconds (30 allocations: 1.594 KB)
>   0.04 seconds (25 allocations: 1.328 KB)
>   0.04 seconds (25 allocations: 1.328 KB)
>
>   0.10 seconds (70 allocations: 3.719 KB)
>   0.072703 seconds (41.74 k allocations: 1.615 MB)
>
>
>
>
>
>
>
>
> On Thursday, September 29, 2016 at 1:37:18 AM UTC-4, Kristoffer Carlsson 
> wrote:
>>
>> Look for Core.Box in @code_warntype. See 
>> https://github.com/JuliaLang/julia/issues/15276
>
>

[julia-users] Re: Any 0.5 performance tips?

2016-09-29 Thread Andrew
I've used @code_warntype everywhere I can think to and I've only found one 
Core.box. The @code_warntype looks like this

Variables:
  #self#::#innerloop#3133{#bellman_obj}
  state::State{IdioState,AggState}
  EVspline::Dierckx.Spline1D
  model::Model{CRRA_Family,AggState}
  policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}}
  OO::NW
  
#3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}

Body:
  begin 
  
#3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
 
= $(Expr(:new, 
##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj},
 
:(state), :(EVspline), :(model), :(policy), :(OO), 
:((Core.getfield)(#self#,:bellman_obj)::#bellman_obj)))
  SSAValue(0) = 
#3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
  
(Core.setfield!)((Core.getfield)(#self#::#innerloop#3133{#bellman_obj},:obj)::CORE.BOX,:contents,SSAValue(0))::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
  return SSAValue(0)
  
end::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}


I put the CORE.BOX in all caps near the bottom.

I have no idea if this is actually a problem. The return type is stable. 
Also, this function appears in an outer loop.
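
If the Core.Box does turn out to matter, the workaround usually suggested for julia#15276 is to rebind the captured variable in a `let` block. A sketch with a made-up stand-in objective (not the real bellman_obj):

```julia
# julia#15276: a closure's captured variable is wrapped in a Core.Box when
# the compiler can't prove the binding is never reassigned. Rebinding it in
# a `let` block gives the closure a fresh, single-assignment binding.
bellman_obj(x, state) = x + state   # hypothetical stand-in for the real objective

function make_innerloop(state)
    f = let state = state
        x -> bellman_obj(x, state)
    end
    return f
end

make_innerloop(1.0)(2.0)   # 3.0
```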

What I noticed putting a @time in places is that in 0.5, occasionally calls 
to my nonlinear equation solver take a really long time, like here:

  0.069224 seconds (9.62 k allocations: 487.873 KB)
  0.07 seconds (39 allocations: 1.922 KB)
  0.06 seconds (29 allocations: 1.391 KB)
  0.11 seconds (74 allocations: 3.781 KB)
  0.09 seconds (54 allocations: 2.719 KB)
  0.08 seconds (54 allocations: 2.719 KB)
  0.08 seconds (49 allocations: 2.453 KB)
  0.07 seconds (44 allocations: 2.188 KB)
  0.07 seconds (44 allocations: 2.188 KB)
  0.06 seconds (39 allocations: 1.922 KB)
  0.07 seconds (39 allocations: 1.922 KB)
  0.06 seconds (39 allocations: 1.922 KB)
  0.05 seconds (34 allocations: 1.656 KB)
  0.05 seconds (34 allocations: 1.656 KB)
  0.04 seconds (29 allocations: 1.391 KB)
  0.04 seconds (24 allocations: 1.125 KB)
  0.007399 seconds (248 allocations: 15.453 KB)
  0.09 seconds (30 allocations: 1.594 KB)
  0.04 seconds (25 allocations: 1.328 KB)
  0.04 seconds (25 allocations: 1.328 KB)

  0.10 seconds (70 allocations: 3.719 KB)
  0.072703 seconds (41.74 k allocations: 1.615 MB)








On Thursday, September 29, 2016 at 1:37:18 AM UTC-4, Kristoffer Carlsson 
wrote:
>
> Look for Core.Box in @code_warntype. See 
> https://github.com/JuliaLang/julia/issues/15276



[julia-users] Any 0.5 performance tips?

2016-09-28 Thread Andrew
My large project is much (3-4x?) slower under 0.5. I know there are a 
variety of open issues about this to be hopefully fixed in the 0.5.x 
timeframe, but are there any general workarounds at the moment?

My project includes the following in case it's relevant:

   - Many nested functions forming closures, which I pass to optimization 
   and equation solving functions. In 0.4 I used the Base.call trick on custom 
   types to make performant closures. I rewrote the code for 0.5 to just use 
   regular anonymous functions since they are fast now.
   - ForwardDiff, splines (Dierckx mostly)
   - Large custom immutable types which carry parameters. These get passed 
   around. I have been considering just making all my parameters global 
   constants rather than passing them around. It seems that this could get 
   them inlined into my functions and save time. However, all my simple tests 
   show global constants perform exactly the same as explicitly passing the 
   parameters, so as long as this still holds in big codes this shouldn't 
   matter.
   - A lot of nested function calls. I prefer to write lots of small 
   functions instead of one big one
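
Regarding the first bullet: the 0.4 Base.call trick survives in 0.5+ as a plain call overload on a type. A sketch with invented names (written in current `struct` syntax; 0.5 spelled it `immutable`):

```julia
# A callable immutable ("functor"): carries parameters in its fields
# and is invoked like a function, with no captured-variable boxing.
struct Objective{T<:Real}
    α::T
    β::T
end
(o::Objective)(x) = o.α * x^2 + o.β

obj = Objective(2.0, 1.0)
obj(3.0)   # 2.0 * 9 + 1.0 = 19.0
```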



[julia-users] Re: Why does Julia 0.5 keep complaining about method re-definitions?

2016-09-27 Thread Andrew
It seems like a lot of people are complaining about this. Is there some way 
to suppress method overwritten warnings for an include() statement? Perhaps 
a keyword like include("foo.jl", quietly = true)?

On Tuesday, September 27, 2016 at 1:56:27 PM UTC-4, Daniel Carrera wrote:
>
> Hello,
>
> I'm not sure when I upgraded, but I am using Julia 0.5 and now it 
> complains every time I redefine a method, which is basically all the time. 
> When I'm developing ideas I usually have a file with a script that I modify 
> and reload all the time:
>
> julia> include("foo.jl");
>
> ... see the results, edit file ...
>
> julia> include("foo.jl");
>
> ... see the results, edit file ...
> julia> include("foo.jl");
>
> ... see the results, edit file ...
>
>
> And so on. This is what I do most of the time. But now every time I 
> `include("foo.jl")` I get warnings for every method that has been redefined 
> (which is all of them):
>
> julia> include("foo.jl");
>
> WARNING: Method definition (::Type{Main.Line})(Float64, Float64) in module 
> Main at /home/daniel/Data/Science/Thesis/SI.jl:4 overwritten at 
> /home/daniel/Data/Science/Thesis/SI.jl:4.
> WARNING: Method definition (::Type{Main.Line})(Any, Any) in module Main at 
> /home/daniel/Data/Science/Thesis/SI.jl:4 overwritten at 
> /home/daniel/Data/Science/Thesis/SI.jl:4.
> WARNING: Method definition new_line(Any, Any, Any) in module Main at 
> /home/daniel/Data/Science/Thesis/SI.jl:8 overwritten at 
> /home/daniel/Data/Science/Thesis/SI.jl:8.
>
>
> Is there a way that this can be fixed? How can I recover Julia's earlier 
> behaviour? This is very irritating, and I don't think it makes sense for a 
> functional language like Julia. If I wrote a method as a variable 
> assignment (e.g. "foo = x -> 2*x") Julia wouldn't complain.
>
>
> Thanks for the help,
> Daniel.
>


[julia-users] Re: How does promote_op work?

2016-09-23 Thread Andrew Keller
Does the promote_op mechanism in v0.5 play nicely with generated functions? 
In Unitful.jl, I use a generated function to determine result units after 
computations involving quantities with units. I seem to get errors 
(@inferred tests fail) if I remove my promote_op specialization. Perhaps my 
problems are all a consequence 
of https://github.com/JuliaLang/julia/issues/18465 and they will go away 
soon...?

On Friday, September 23, 2016 at 5:54:03 AM UTC-7, Pablo Zubieta wrote:
>
> In Julia 0.5 the following should work without needing to do anything to 
> promote_op
>
> import Base.+
> immutable Foo end
> +(a::Foo, b::Foo) = 1.0
> Array{Foo}(0) + Array{Foo}(0)
>
> promote_op is supposed to be an internal method that you wouldn't need to 
> override. If it is not working, it is because the operation you are doing is 
> most likely not type stable. So instead of specializing it you could try to 
> remove any type instabilities in the method definitions over your types.
>
> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>
>>
>> The subject says it all: it looks like one can override promote_op to 
>> support the following behaviour:
>>
>> *julia> **import Base.+*
>>
>>
>> *julia> **immutable Foo end*
>>
>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>> REPL[5]:1 overwritten at REPL[10]:1.
>>
>>
>> *julia> **+(a::Foo,b::Foo) = 1.0*
>>
>> *+ (generic function with 164 methods)*
>>
>>
>> *julia> **Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64*
>>
>>
>> *julia> **Array(Foo,0) + Array(Foo,0)*
>>
>> *0-element Array{Float64,1}*
>>
>>
>> Is this documented somewhere?  What if we want to override /, -, etc., is 
>> the solution to write a promote_op for each case?
>>
>

[julia-users] Re: Unitful.jl for physical units

2016-09-06 Thread Andrew Keller
Unitful.jl has recently undergone a major update with enhanced 
functionality. If the previous version was an alpha release, this is 
something like a beta release before the package is registered and tagged. 
I intend to do that soon so that others can think about using the package 
more seriously, probably shortly after Julia 0.5.0 is released. Now is a 
good time to provide feedback if you find bugs or think there are some 
trouble spots that would prevent the package from being useful to you. 

I am sorry to inconvenience existing users with syntax changes, but please 
raise an issue or post here if you have any trouble. I hope the revised 
documentation explains any questions you may have, and as usual, 
test/runtests.jl can be useful to figure out how things are working. Once 
the package is registered and tagged, I will be more conscientious 
regarding breaking changes.

Enjoy,
Andrew Keller

On Friday, February 12, 2016 at 12:23:22 PM UTC-8, Andrew Keller wrote:
>
> I'm happy to share a package I wrote for using physical units in Julia, 
> Unitful.jl <https://www.github.com/ajkeller34/Unitful.jl>. Much credit 
> and gratitude is due to Keno Fischer for the SIUnits.jl 
> <https://www.github.com/keno/SIUnits.jl> package which served as my 
> inspiration. This is a work in progress, but I think perhaps a serviceable 
> one depending on what you're doing. 
>
> Like SIUnits.jl, this package encodes units in the type signature to avoid 
> run-time performance penalties. From there, the implementations diverge. 
> The package is targeted to Julia 0.5 / master, as there are some 
> limitations with how promote_op is used in Julia 0.4 (#13803) 
> <https://github.com/JuliaLang/julia/pull/13803>. I decided it wasn't 
> worth targeting 0.4 if the behavior would be inconsistent. 
>
> Some highlights include:
>
> - Non-SI units are treated on the same footing as SI units, with only 
>   a few exceptions (unit conversion method). Use whatever weird units 
>   you want. 
> - Support for units like micron / (meter Kelvin), where some of the 
>   units could cancel out but you don't necessarily want them to. 
> - Support for LinSpace and other Range types. Probably there are still 
>   some glitches to be found, though. 
> - Support for rational exponents of units. 
> - Some tests (see these for usage examples). 
>
> Please see the documentation for a comprehensive discussion, including 
> issues / to do list, as well as how to add your own units, etc.
> Comments and feedback are welcome.
>
> Best,
> Andrew Keller
>


Re: [julia-users] Re: Juila vs Mathematica (Wolfram language): high-level features

2016-09-03 Thread Andrew Dabrowski
But what about nested pattern matching, or destructuring, isn't that much 
easier in Mathematica than Julia?  For example defining a function of two lists 
by
f[{w_,x_}, {y_,z_}]:=x y/(w+z).
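
For comparison, later Julia versions (0.7+) do support tuple destructuring in argument lists, so a rough analogue of the Mathematica definition is:

```julia
# Mathematica: f[{w_, x_}, {y_, z_}] := x y/(w + z)
f((w, x), (y, z)) = x * y / (w + z)

f((1, 2), (3, 4))   # 2 * 3 / (1 + 4) = 1.2
```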

I remember reading the Julia manifesto a few years ago, where the stated goal 
was to create a single computing language that would replace Fortran, scipy, 
Mathematica, Matlab, etc. simultaneously.  I thought at the time that it 
sounded nuts. 

Can we all agree now that it was, in fact, nuts? 

[julia-users] Re: @threads all vs. @parallel ???

2016-08-30 Thread Andrew
I have also been wondering this. I tried @threads yesterday and it got me 
around a 4-fold speedup on a loop which applied a function to each element 
in an array, and I conveniently didn't need to bother using SharedArrays as 
I would with @parallel.
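
The pattern described above, as a sketch (assumes Julia was started with multiple threads; in the 0.5 era this meant setting JULIA_NUM_THREADS, and the feature was still experimental):

```julia
using Base.Threads

# Apply `f` elementwise, splitting iterations across threads.
# Safe here because each iteration writes a distinct index of `out`.
function tmap!(f, out, xs)
    @threads for i in eachindex(xs)
        @inbounds out[i] = f(xs[i])
    end
    return out
end

xs = rand(10^6)
out = similar(xs)
tmap!(sin, out, xs)
```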

On Tuesday, August 30, 2016 at 7:20:36 PM UTC-4, digxx wrote:
>
> Sorry if there is already some information on this though I didnt find 
> it...
> So: What is the difference between these?
> I have used @parallel so far for parallel loops but recently saw this 
> @threads all in some video and I was wondering what the difference is?
> Could anyone elaborate or give me a link with some info?
> Thanks digxx
>


Re: [julia-users] Performant methods for enclosing parameters?

2016-08-23 Thread Andrew
I tried moving k outside.
julia> outside_k(u::Float64,t::Float64,α) = α*u
outside_k (generic function with 1 method)

julia> function test(α, k)
           G = (u,t) -> k(u,t,1.01)
           G2 = (u,t) -> k(u,t,α)
           const β = 1.01
           G3 = (u,t) -> k(u,t,β)

           @code_llvm G(1., 2.)
           @code_llvm G2(1., 2.)
           @code_llvm G3(1., 2.)
       end
test (generic function with 1 method)

julia> test(2., outside_k)

define double @"julia_#1_69882"(double, double) #0 {
top:
  %2 = fmul double %0, 1.01e+00
  ret double %2
}

define double @"julia_#2_69884"(%"##2#5"*, double, double) #0 {
top:
  %3 = getelementptr inbounds %"##2#5", %"##2#5"* %0, i64 0, i32 0
  %4 = load double, double* %3, align 8
  %5 = fmul double %4, %1
  ret double %5
}

define double @"julia_#3_69886"(%"##3#6"*, double, double) #0 {
top:
  %3 = getelementptr inbounds %"##3#6", %"##3#6"* %0, i64 0, i32 1
  %4 = load double, double* %3, align 8
  %5 = fmul double %4, %1
  ret double %5
}



I think the problem with your original code is that you haven't defined α, 
so it looks like a global variable. 
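
The global-variable penalty is easy to demonstrate on its own (a minimal sketch; the names are made up):

```julia
α_nonconst = 2.0        # non-const global: its type may change later,
                        # so code using it can't be specialized
const α_const = 2.0     # const global: type and value fixed, so the
                        # compiler can fold it in like a literal

k_slow(u, t) = α_nonconst * u   # body must look up a global of unknown type
k_fast(u, t) = α_const * u      # compiles as if 2.0 were written inline

k_fast(3.0, 0.0)   # 6.0
```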

On Tuesday, August 23, 2016 at 8:14:00 PM UTC-4, Chris Rackauckas wrote:
>
> In my scenarios, the function k is given by the user. So this method won't 
> work. If you move that definition of k outside of test() and pass it into 
> your test, you'll see that the LLVM code explodes (at least it does for 
> me). The issue is defining a closure on a function which was defined 
> outside of the current scope.
>
>
> BTW, what exactly is "%3 = getelementptr inbounds %"##30#34", %"##30#34"* 
> %0, i64 0, i32 1". How harmful is it to performance (I just see inbounds 
> and an extra line and want to get rid of it, but if it's harmless then 
> there are two solutions in here).
>
>  
> On Tuesday, August 23, 2016 at 5:04:21 PM UTC-7, Andrew wrote:
>>
>> I'm pretty confused about what you're trying to accomplish beyond 
>> standard closures. What is your ParameterHolder type for?
>>
>> I rewrote your first example wrapping everything in a function. Is this 
>> doing what you want it to? The LLVM looks fine.
>> function test(α)
>> k(u::Float64,t::Float64,α) = α*u
>> G = (u,t) -> k(u,t,1.01)
>> G2 = (u,t)->k(u,t,α)
>> const β = 1.01
>> G3 = (u,t)->k(u,t,β)
>>
>> @code_llvm G(1., 2.)
>> @code_llvm G2(1., 2.)
>> @code_llvm G3(1., 2.)
>> end
>>
>> julia> test(2.)
>>
>> define double @"julia_#28_70100"(double, double) #0 {
>> top:
>>   %2 = fmul double %0, 1.01e+00
>>   ret double %2
>> }
>>
>> define double @"julia_#29_70102"(%"##29#33"*, double, double) #0 {
>> top:
>>   %3 = getelementptr inbounds %"##29#33", %"##29#33"* %0, i64 0, i32 0
>>   %4 = load double, double* %3, align 8
>>   %5 = fmul double %4, %1
>>   ret double %5
>> }
>>
>> define double @"julia_#30_70104"(%"##30#34"*, double, double) #0 {
>> top:
>>   %3 = getelementptr inbounds %"##30#34", %"##30#34"* %0, i64 0, i32 1
>>   %4 = load double, double* %3, align 8
>>   %5 = fmul double %4, %1
>>   ret double %5
>> }
>>
>>
>>
>>
>> On Tuesday, August 23, 2016 at 6:00:37 PM UTC-4, Chris Rackauckas wrote:
>>>
>>> Yes, I am looking for a closure which has the least overhead possible 
>>> (this was all in v0.5). For example, for re-ordering parameters: g = 
>>> (du,u,t) -> f(t,u,du), or for enclosing parameter values as above.
>>>
>>> I'll give the Val method a try and see whether the compile time is 
>>> significant (it will probably be an option). 
>>>
>>> Even a solution which is like translator1/2 where the inbounds check is 
>>> able to be turned off would likely be performant enough to not be 
>>> noticeably different.
>>>
>>> On Tuesday, August 23, 2016 at 1:27:25 PM UTC-7, Erik Schnetter wrote:
>>>>
>>>> Chris
>>>>
>>>> I don't quite understand what you mean. Are you looking for a closure / 
>>>> lambda expression?
>>>>
>>>> ```Julia
>>>> function myfunc(x0, x1, alpha)
>>>> f(x) = alpha * x
>>>> ODE.solve(f, x0, x1)
>>>> end
>>>> ```
>>>>
>>>> Or is it important for you that your function `f` is optimized, i.e. 
>>>> you want to re-run the code generator (expensive!) every t

Re: [julia-users] Performant methods for enclosing parameters?

2016-08-23 Thread Andrew
I'm pretty confused about what you're trying to accomplish beyond standard 
closures. What is your ParameterHolder type for?

I rewrote your first example wrapping everything in a function. Is this 
doing what you want it to? The LLVM looks fine.
function test(α)
k(u::Float64,t::Float64,α) = α*u
G = (u,t) -> k(u,t,1.01)
G2 = (u,t)->k(u,t,α)
const β = 1.01
G3 = (u,t)->k(u,t,β)

@code_llvm G(1., 2.)
@code_llvm G2(1., 2.)
@code_llvm G3(1., 2.)
end

julia> test(2.)

define double @"julia_#28_70100"(double, double) #0 {
top:
  %2 = fmul double %0, 1.01e+00
  ret double %2
}

define double @"julia_#29_70102"(%"##29#33"*, double, double) #0 {
top:
  %3 = getelementptr inbounds %"##29#33", %"##29#33"* %0, i64 0, i32 0
  %4 = load double, double* %3, align 8
  %5 = fmul double %4, %1
  ret double %5
}

define double @"julia_#30_70104"(%"##30#34"*, double, double) #0 {
top:
  %3 = getelementptr inbounds %"##30#34", %"##30#34"* %0, i64 0, i32 1
  %4 = load double, double* %3, align 8
  %5 = fmul double %4, %1
  ret double %5
}




On Tuesday, August 23, 2016 at 6:00:37 PM UTC-4, Chris Rackauckas wrote:
>
> Yes, I am looking for a closure which has the least overhead possible 
> (this was all in v0.5). For example, for re-ordering parameters: g = 
> (du,u,t) -> f(t,u,du), or for enclosing parameter values as above.
>
> I'll give the Val method a try and see whether the compile time is 
> significant (it will probably be an option). 
>
> Even a solution which is like translator1/2 where the inbounds check is 
> able to be turned off would likely be performant enough to not be 
> noticeably different.
>
> On Tuesday, August 23, 2016 at 1:27:25 PM UTC-7, Erik Schnetter wrote:
>>
>> Chris
>>
>> I don't quite understand what you mean. Are you looking for a closure / 
>> lambda expression?
>>
>> ```Julia
>> function myfunc(x0, x1, alpha)
>> f(x) = alpha * x
>> ODE.solve(f, x0, x1)
>> end
>> ```
>>
>> Or is it important for you that your function `f` is optimized, i.e. you 
>> want to re-run the code generator (expensive!) every time there's a new 
>> value for `alpha`? For this, you can use `Val` (but please benchmark first):
>>
>> ```Julia
>> function f{alpha}(x, ::Type{Val{alpha}})
>> alpha * x
>> end
>>
>> function myfunc(x0, x1, alpha)
>> f1(x) = f(x, Val{alpha})
>> ODE.solve(f, x0, x1)
>> end
>> ```
>>
>> This will have a marginally faster evaluation of `f`, at the cost of 
>> compiling a separate function for each value of `alpha`.
>>
>> Since these examples use closures, they will be much more efficient in 
>> Julia 0.5 than in 0.4.
>>
>> -erik
>>
>>
>>
>>
>> On Tue, Aug 23, 2016 at 2:23 PM, Chris Rackauckas  
>> wrote:
>>
>>> Note: This looks long, but really just has a lot of LLVM IR!
>>>
>>> I have been digging into the issue recently of the best way to enclose 
>>> parameters with a function 
>>> . 
>>> This is an issue that comes up a lot with scientific codes, and so I was 
>>> hoping to try and get it right. However, the results of my experiments 
>>> aren't looking too good, and so I was hoping to find out whether I am 
>>> running into some bug or simply just not finding the optimal solution.
>>>
>>> The example is as follows (with LLVM IR included to show how exactly 
>>> everything is compiling). Say the user wants we to do a bunch of things 
>>> with the function f(u,t)=α*u where α is some parameter. They don't 
>>> necessarily want to replace it as a constant since they may change it 
>>> around a bit, but every time this function is given to me, I can treat it 
>>> as a constant. If they were willing to treat it as a constant, then they 
>>> could take this function:
>>>
>>> k(u::Float64,t::Float64,α) = α*u
>>> println("Standard k definition")
>>> @code_llvm k(1.0,2.0,1.01)
>>>
>>> #Result
>>>
>>> define double @julia_k_70163(double, double, double) #0 {
>>> top:
>>>   %3 = fmul double %0, %2
>>>   ret double %3
>>> }
>>>
>>>
>>> and enclose the constant:
>>>
>>> G = (u,t) -> k(u,t,1.01)
>>> G2 = (u,t)->k(u,t,α)
>>> println("Top level inlined k")
>>> @code_llvm G(1.0,2.0)
>>> println("Top level not inlined k")
>>> @code_llvm G2(1.0,2.0)
>>> const β = 1.01
>>> G3 = (u,t)->k(u,t,β)
>>> println("Top level not inlined but const k")
>>> @code_llvm G3(1.0,2.0)
>>>
>>> #Results
>>>
>>> Top level inlined k
>>>
>>> define double @"julia_#159_70165"(double, double) #0 {
>>> top:
>>>   %2 = fmul double %0, 1.01e+00
>>>   ret double %2
>>> }
>>>
>>> Top level not inlined k
>>>
>>> define %jl_value_t* @"julia_#161_70167"(double, double) #0 {
>>> top:
>>>   %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
>>>   %ptls_i8 = getelementptr i8, i8* %thread_ptr, i64 -2672
>>>   %ptls = bitcast i8* %ptls_i8 to %jl_value_t***
>>>   %2 = alloca [5 x %jl_value_t*], align 8
>>>   %.sub = getelementptr inbounds [5 x %jl_value_t*], [5 x %jl_value_t*]* 
>>> %2, i64 0, i64 0
>>>   %3 = getelementptr [5 x %jl_value_t*], [5 x %jl

[julia-users] Re: ANN: Optim v0.6

2016-08-14 Thread Andrew
I also couldn't find the documentation for a minute. I think it's because 
it's attached to the test-status badges, which I glossed over due to 
lack of interest.

On Sunday, August 14, 2016 at 4:22:26 AM UTC-4, Kristoffer Carlsson wrote:
>
> What did you expect them to be? They are right under a big table header 
> saying "Documentation". They have the same style as the other badges which 
> are also clickable and have been for years. I think it is worth seeing if 
> more people can't find the documentation before doing any changes. 



[julia-users] Re: Juno IDE Console Error: supertype not defined

2016-08-12 Thread Andrew Rushby
I seem to have fixed the problem with Pkg.build("Media.jl"). Thanks for 
your help anyway!


[julia-users] Re: Juno IDE Console Error: supertype not defined

2016-08-12 Thread Andrew Rushby
Thanks for the reply. Yes, I saw that here 
<http://discuss.junolab.org/t/atom-juno-windows-10-supertype-not-defined-error/787>
 but 
it doesn't seem to have changed much for me. Here's the result of 
Pkg.status():

2 required packages:
- Atom 0.4.4
- PyPlot 2.1.0
31 additional packages:
- BinDeps 0.4.2
- Blink 0.3.4
- CodeTools 0.3.1
- Codecs 0.2.0
- ColorTypes 0.2.5
- Colors 0.6.6
- Compat 0.8.6
- Conda 0.0.0- non-repo (unregistered)
- Dates 0.4.4
- FactCheck 0.4.3
- FixedPointNumbers 0.1.4
- Hiccup 0.0.3
- HttpCommon 0.2.6
- HttpParser 0.1.1
- HttpServer 0.1.6
- JSON 0.6.0
- JuliaParser 0.6.4
- LNR 0.0.2
- LaTeXStrings 0.2.0
- Lazy 0.11.0
- MacroTools 0.3.2
- MbedTLS 0.2.6
- Media 0.2.1
- Mustache 0.0.15
- Mux 0.2.1
- PyCall 1.0.3
- Reexport 0.0.3
- Requires 0.2.2
- SHA 0.2.0
- URIParser 0.1.6
- WebSockets 0.2.0

Followed by:

UndefVarError: supertype not defined
 in distance at C:\Users\Andrew\.julia\v0.4\Media\src\system.jl:8
 in nearest at C:\Users\Andrew\.julia\v0.4\Media\src\system.jl:13
 in anonymous at C:\Users\Andrew\.julia\v0.4\Media\src\system.jl:17
 in _mapreduce at reduce.jl:145
 in mapreduce at reduce.jl:159
 in nearest at C:\Users\Andrew\.julia\v0.4\Media\src\system.jl:16
 in getdisplay at C:\Users\Andrew\.julia\v0.4\Media\src\system.jl:114
 in render at C:\Users\Andrew\.julia\v0.4\Media\src\system.jl:166
 [inlined code] from C:\Users\Andrew\.julia\v0.4\Atom\src\eval.jl:41
 in anonymous at C:\Users\Andrew\.julia\v0.4\Atom\src\eval.jl:108
 in withpath at C:\Users\Andrew\.julia\v0.4\Requires\src\require.jl:37
 in withpath at C:\Users\Andrew\.julia\v0.4\Atom\src\eval.jl:53
 [inlined code] from C:\Users\Andrew\.julia\v0.4\Atom\src\eval.jl:107
 in anonymous at task.jl:58

I guess I might be missing a package? 

Andrew


On Friday, 12 August 2016 14:31:23 UTC-7, Tony Kelman wrote:
>
> I believe there was an update of Media.jl that should have fixed this, 
> what does Pkg.status() say?
>
> On Friday, August 12, 2016 at 1:40:11 PM UTC-7, Andrew Rushby wrote:
>>
>> I'm running into the same error message. I've tried reinstalling as well 
>> as Pkg.update() as I've seen suggested elsewhere, but no luck. As I'm 
>> rather new to this, is there anything else I could try?
>>
>> On Thursday, 11 August 2016 18:25:57 UTC-7, Tony Kelman wrote:
>>>
>>> This is what happens when packages Juno depends on have no unit tests. 
>>> It looks like it's been fixed on master of Media.jl which should get tagged 
>>> shortly.
>>>
>>> On Thursday, August 11, 2016 at 3:59:27 PM UTC-7, Kaela Martin wrote:
>>>>
>>>> When running Juno IDE with Atom, I get the following error message:
>>>>
>>>> UndefVarError: supertype not defined
>>>>  in distance at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:8
>>>>  in nearest at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:13
>>>>  in anonymous at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:17
>>>>  in _mapreduce at reduce.jl:145
>>>>  in mapreduce at reduce.jl:159
>>>>  in nearest at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:16
>>>>  in getdisplay at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:114
>>>>  in render at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:166
>>>>  [inlined code] from C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:41
>>>>  in anonymous at C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:108
>>>>  in withpath at C:\Users\kaelam\.julia\v0.4\Requires\src\require.jl:37
>>>>  in withpath at C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:53
>>>>  [inlined code] from C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:107
>>>>  in anonymous at task.jl:58
>>>>
>>>> I've tried uninstalling the Media package, deleting the file manually, 
>>>> updating the packages, and still get the same error when using Atom. The 
>>>> Julia terminal works fine, but the console in Atom fails for any input 
>>>> even 
>>>> 1+1.
>>>>
>>>> Uninstalling and re-installing the IDE results in the same error. Any 
>>>> suggestions?
>>>>
>>>

[julia-users] Re: Juno IDE Console Error: supertype not defined

2016-08-12 Thread Andrew Rushby
I'm running into the same error message. I've tried reinstalling as well as 
Pkg.update() as I've seen suggested elsewhere, but no luck. As I'm rather 
new to this, is there anything else I could try?

On Thursday, 11 August 2016 18:25:57 UTC-7, Tony Kelman wrote:
>
> This is what happens when packages Juno depends on have no unit tests. It 
> looks like it's been fixed on master of Media.jl which should get tagged 
> shortly.
>
> On Thursday, August 11, 2016 at 3:59:27 PM UTC-7, Kaela Martin wrote:
>>
>> When running Juno IDE with Atom, I get the following error message:
>>
>> UndefVarError: supertype not defined
>>  in distance at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:8
>>  in nearest at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:13
>>  in anonymous at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:17
>>  in _mapreduce at reduce.jl:145
>>  in mapreduce at reduce.jl:159
>>  in nearest at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:16
>>  in getdisplay at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:114
>>  in render at C:\Users\kaelam\.julia\v0.4\Media\src\system.jl:166
>>  [inlined code] from C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:41
>>  in anonymous at C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:108
>>  in withpath at C:\Users\kaelam\.julia\v0.4\Requires\src\require.jl:37
>>  in withpath at C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:53
>>  [inlined code] from C:\Users\kaelam\.julia\v0.4\Atom\src\eval.jl:107
>>  in anonymous at task.jl:58
>>
>> I've tried uninstalling the Media package, deleting the file manually, 
>> updating the packages, and still get the same error when using Atom. The 
>> Julia terminal works fine, but the console in Atom fails for any input even 
>> 1+1.
>>
>> Uninstalling and re-installing the IDE results in the same error. Any 
>> suggestions?
>>
>

[julia-users] Re: Does a wrapper for gmtime exist

2016-07-28 Thread Andrew Gibb
Thanks both for the suggestions. 

On Wednesday, 27 July 2016 19:54:38 UTC+1, Jeffrey Sarnoff wrote:
>
> Andy,
>
> You are welcome to use https://github.com/J-Sarnoff/UTime.jl
>
> Pkg.clone("https://github.com/J-Sarnoff/UTime.jl")
> using UTime
>
> localtime()
> 2016-07-27T14:41:37.497-04:00
>
> ut()
> 2016-07-27T18:41:42Z
>
> gmt() # gmt aliases ut
> 2016-07-27T18:41:45Z
>
> alocaltime = localtime()
> 2016-07-27T14:42:04.282-04:00
> ut(alocaltime)
> 2016-07-27T18:42:04.282Z
>
> auniversaltime = ut()
> 2016-07-27T18:42:24Z
> localtime(auniversaltime)
> 2016-07-27T14:42:24-04:00
>
> julia> g=gmt(); g
> 2016-07-27T18:50:48Z
>
> julia> year(g),month(g),day(g),hour(g),minute(g),second(g)
> (2016,7,27,18,50,48)
>
>
>
> There is more that one may do using UTime, take a look at the README 
> examples.
>
>
> It has been some time since I revisited it, so please ask if you have any 
> questions, 
> and let me know if there is something that is not now working as it had.
> I just ran the examples above in v0.5-.
>
> Regards, 
> Jeffrey
>
> On Wednesday, July 27, 2016 at 9:32:56 AM UTC-4, Andrew Gibb wrote:
>>
>> Hi,
>>
>> I'm writing a module for dealing with video timing. The C++ code I'm 
>> using as a basis uses gmtime to convert seconds into y:m:d.etc. Does anyone 
>> know if this functionality is exposed in Julia anywhere?
>>
>> I realise I could handle this myself with ccall, I just wanted to check 
>> to see if I could avoid duplicating effort.
>>
>> Thanks
>>
>> Andy
>>
>

[julia-users] Re: segfault when building master on linux

2016-07-27 Thread andrew cooke

well, checking out and building completely from zero fixed it, whatever it 
was.

On Wednesday, 27 July 2016 20:40:44 UTC-4, andrew cooke wrote:
>
>
> last log is
>
> commit bcc2121dc31647fc0999d09c10df91d35f216838
> Merge: d0a378d 276c52e
> Author: Jeff Bezanson 
> Date:   Wed Jul 27 14:28:16 2016 -0600
> Merge pull request #17638 from JuliaLang/jb/cleanup
> formatting fixes and code cleanup
>
> and this is the error (after make clean, then make, and waiting some time):
>
> Copying in usr/share/doc/julia/examples
> JULIA usr/lib/julia/inference.ji
> /bin/sh: line 1: 17651 Segmentation fault  /home/andrew/pkg/julia-0.5/
> usr/bin/julia -C native --output-ji /home/andrew/pkg/julia-0.5/usr/lib/
> julia/inference.ji --startup-file=no coreimg.jl
> Makefile:215: recipe for target 
> '/home/andrew/pkg/julia-0.5/usr/lib/julia/inference.ji' failed
> make[1]: *** [/home/andrew/pkg/julia-0.5/usr/lib/julia/inference.ji] Error 
> 139
> Makefile:96: recipe for target 'julia-inference' failed
> make: *** [julia-inference] Error 2
>
>
> i can't find anything here or on the dev list recently.  anyone have any 
> hints?
>
> (i want julia-0.5 to test a patch)
>
> thanks,
> andrew
>
>

[julia-users] segfault when building master on linux

2016-07-27 Thread andrew cooke

last log is

commit bcc2121dc31647fc0999d09c10df91d35f216838
Merge: d0a378d 276c52e
Author: Jeff Bezanson 
Date:   Wed Jul 27 14:28:16 2016 -0600
Merge pull request #17638 from JuliaLang/jb/cleanup
formatting fixes and code cleanup

and this is the error (after make clean, then make, and waiting some time):

Copying in usr/share/doc/julia/examples
JULIA usr/lib/julia/inference.ji
/bin/sh: line 1: 17651 Segmentation fault  /home/andrew/pkg/julia-0.5/
usr/bin/julia -C native --output-ji /home/andrew/pkg/julia-0.5/usr/lib/julia
/inference.ji --startup-file=no coreimg.jl
Makefile:215: recipe for target 
'/home/andrew/pkg/julia-0.5/usr/lib/julia/inference.ji' failed
make[1]: *** [/home/andrew/pkg/julia-0.5/usr/lib/julia/inference.ji] Error 
139
Makefile:96: recipe for target 'julia-inference' failed
make: *** [julia-inference] Error 2


i can't find anything here or on the dev list recently.  anyone have any 
hints?

(i want julia-0.5 to test a patch)

thanks,
andrew



[julia-users] Does a wrapper for gmtime exist

2016-07-27 Thread Andrew Gibb
Hi,

I'm writing a module for dealing with video timing. The C++ code I'm using 
as a basis uses gmtime to convert seconds into y:m:d.etc. Does anyone know 
if this functionality is exposed in Julia anywhere?

I realise I could handle this myself with ccall, I just wanted to check to 
see if I could avoid duplicating effort.

Thanks

Andy


[julia-users] Re: Segfaults in SharedArrays

2016-06-25 Thread Andrew
I didn't see the SharedArray fix on the 0.4.6 commit log.

On Saturday, June 25, 2016 at 2:01:31 PM UTC-4, Nils Gudat wrote:
>
> No luck unfortunately on 0.4.6 either, so it seems the SharedArray fix 
> (assuming it made it into 0.4.6) didn't help.
> Any other ideas? How does one figure out the source of a segfault?
>


[julia-users] Re: Tips for optimizing this short code snippet

2016-06-18 Thread Andrew
I don't think the anonymous y -> 1/g(y) and the nested function invg(y) = 
1/g(y) are any different in terms of performance. They both form closures 
over local variables, and they should both be faster under 0.5.

I did run your code under 0.5dev and it was slower, but I think they're 
still fixing a variety of performance issues. I don't know much about it 
though.  

On Saturday, June 18, 2016 at 10:25:11 AM UTC-4, Marius Millea wrote:
>
> Ahh sorry, forget the 2x slower thing, I had accidentally changed 
> something else. Both the anonymous y->1/g(y) and invg(y) give essentially 
> the exact same run time. 
>
> There are a number of 1's and 0's, but AFAICT they shouldn't cause any 
> type instabilities, if the input variable y or x is a Float64, the output 
> should always be Float64 also. In any case I did check switching them to 1. 
> and 0.'s but that also has no effect. 
>
> Marius
>
>
>
>
> On Saturday, June 18, 2016 at 4:08:59 PM UTC+2, Eric Forgy wrote:
>>
>> Try code_warntype. I'm guessing you have some type instabilities, e.g. I 
>> see some 1's and 0's, where it might be better to use 1.0 and 0.0. Not sure 
>> :)
>>
>> On Saturday, June 18, 2016 at 9:48:29 PM UTC+8, Marius Millea wrote:
>>>
>>> Thanks, yea, I had read that too and at some point checked if it 
>>> mattered and it didn't seem to, which wasn't entirely surprising since 
>>> it's on the outer loop. 
>>>
>>> But I just checked again given your comment and on Julia 0.4.5 it seems 
>>> to actually be 2x slower if I switch it to this:
>>>
>>> function f(x)
>>> invg(y) = 1/g(y)
>>> quadgk(invg,0,x)[1]  # <=== outer integral
>>> end
>>>
>>> Odd...
>>>
>>>
>>> On Saturday, June 18, 2016 at 3:41:37 PM UTC+2, Eric Forgy wrote:

 Which version of Julia are you using? One thing that stands out is the 
 anonymous function y->1/g(y) being passed as an argument to quadgk. I'm 
 not 
 an expert, but I've heard this is slow in v0.4 and below, but should be 
 fast in v0.5. Just a thought.

 On Saturday, June 18, 2016 at 8:53:57 PM UTC+8, Marius Millea wrote:
>
> Hi all, I'm sort of just starting out with Julia. I'm trying to get a 
> gauge of how fast I can make some code of which I have Cython and Fortran 
> versions, to see if I should continue down the path of converting more of 
> my 
> stuff to Julia (which in general I'd very much like to, if I can get it 
> fast enough). I thought maybe I'd post the code in question here to see 
> if 
> I could get any tips. I've stripped down the original thing to what I 
> think 
> are the important parts, a nested integration with an inner function 
> closure and some global variables. 
>
> module test
>
> const a = 1.
>
> function f(x)
> quadgk(y->1/g(y),0,x)[1]  # <=== outer integral
> end
>
> function g(y)
> integrand(x) = x^2*sqrt(x^2*y^2+a)/(exp(sqrt(x^2+y^2))+a)
> quadgk(integrand,0,Inf)[1]   # <=== inner integral
> end
>
> end
>
>
> > @timeit test.f(1.)
> 100 loops, best of 3: 3.10 ms per loop
>
>
>
>
> Does anyone have any tips that squeeze a little more out of this 
> code? I have run ProfileView on it, and although I'm not sure I fully 
> understand how to read its output, I think it's saying the majority of 
> runtime is spent in quadgk itself. So perhaps I should look into using a 
> different integration library? 
>
> Thanks for any help. 
>
>

[julia-users] Re: ArrayFire.jl - GPU Programming in Julia

2016-06-11 Thread Andrew
I have very little knowledge of GPU computing. Is it possible to use the 
GPU for not just simple matrix operations, but for arbitrary functions? I 
think I'm looking here for a GPU pmap(f, A), where f is a function I've 
defined and A is some array with a few thousand elements.

On Saturday, June 11, 2016 at 7:49:30 AM UTC-4, Ranjan Anantharaman wrote:
>
> Hey, I'm also giving a talk on ArrayFire.jl at JuliaCon 2016 too: 
> http://juliacon.org/abstracts.html#ArrayFire. See how ArrayFire.jl is 
> useful in accelerating your code!
>
> Thanks,
> Ranjan
>
> On Friday, 10 June 2016 10:38:42 UTC+5:30, ran...@juliacomputing.com 
> wrote:
>>
>> Hello, 
>>
>> We are pleased to announce ArrayFire.jl, a library for GPU and 
>> heterogeneous computing in Julia: (
>> https://github.com/JuliaComputing/ArrayFire.jl). We look forward to your 
>> feedback and your contributions as well! 
>>
>> For more information, check out Julia Computing's latest blog post: 
>> http://juliacomputing.com/blog/2016/06/09/julia-gpu.html
>>
>> Thanks,
>> Ranjan
>> Julia Computing, Inc. 
>>
>

[julia-users] Re: gigantic memory allocation

2016-06-09 Thread Andrew
This is somewhat better:


function ModGS2(A::Array{Float64,2})
    delta = 1e-16
    (m, n) = size(A)
    for j = 1:n
        q = slice(A, :, j)
        q[:] = q / (norm(q) + delta)
        A[:,j] = q
        for k = j+1:n
            a = slice(A, :, k)
            r = dot(q, a)
            for i = 1:m; A[i,k] = a[i] - r*q[i]; end
        end
    end
    return nothing
end

function test()
    A = rand(1000, 500)
    B = copy(A)

    @time ModGS(A)
    @time ModGS2(B)
    @show any(A .!= B)
end

julia> test()
  0.585496 seconds (875.25 k allocations: 2.831 GB, 14.72% gc time)
  0.249571 seconds (376.25 k allocations: 17.231 MB)
any(A .!= B) = false
false
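The gain comes from avoiding per-iteration temporaries. A hedged micro-example of the same difference (the helper names are mine, not from the thread):

```julia
# `A[:, k] - r * q` allocates fresh arrays on every call; the explicit
# loop updates the column in place with no temporaries.
function update_alloc!(A, q, r, k)
    A[:, k] = A[:, k] - r * q
end
function update_inplace!(A, q, r, k)
    for i in 1:size(A, 1)
        A[i, k] -= r * q[i]
    end
end

A = reshape(collect(1.0:6.0), 3, 2)
B = copy(A)
q = [1.0, 2.0, 3.0]
update_alloc!(A, q, 0.5, 2)
update_inplace!(B, q, 0.5, 2)
A == B   # identical results, different allocation behavior
```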









On Thursday, June 9, 2016 at 6:28:27 PM UTC-4, wg...@ualberta.ca wrote:
>
> Hi Julia Users
>
> I write a small function to do Gram-Schmidt orthogonalization:
>
> function ModGS(A::Array{Float64,2})
>   delta = 1e-16
>   (m,n) = size(A)
>   for j = 1: n
>   q = A[:,j]
>   q = q / (norm(q)+delta)
>   A[:,j] = q
>   for k = j+1: n
>a = A[:,k]
>r = dot(q, a)
>A[:,k] = a - r*q
>   end
>   end
>   return nothing
> end
>
> A = rand(1000, 500);
> @time ModGS(A);
>
> 0.696324 seconds (875.25 k allocations: 2.831 GB, 15.78% gc time)
>
>
> it causes a huge amount of memory allocation. Does anybody know the reason? How 
> can I improve it?
>
>
> I also use memory track tool, this is the output:
>
> - function ModGS(A::Array{Float64, 2})
>
>  65440991 delta = 1e-16
>
> 0 (m,n) = size(A)
>
> 0 for j = 1: n
>
>   4056000 q = A[:,j]
>
>   404 q = q / (norm(q)+delta)
>
> 0 A[:,j] = q
>
> 0 for k = j+1: n
>
> 1011972000  a = A[:,k]
>
> 0 r = dot(q, a)
>
> 2019952000  A[:,k] = a - r*q
>
> - end
>
> - end
>
> 0 return nothing
>
> - end
>
> - 
>
> - 
>
> - A = randn(1000, 500);
>
> - ModGS(A)
>
> - 
>
>
> Thanks
>
>
>
>
>

Re: [julia-users] Re: Double free or corruption (out)

2016-06-02 Thread Andrew
Have you tried running the code without using parallel? I have been getting 
similar errors in my economics code. It segfaults sometimes, though not 
always, after a seemingly random amount of time, sometimes an hour or so, 
sometimes less. However, I don't recall it having ever occurred in the 
times I've run it without parallel. I'm using SharedArrays like you. I've 
seen this occur on both 0.4.1 and 0.4.5.

The error isn't too serious for me because I periodically save the 
optimization state to disk, so I can just restart.

I also can't remember this ever occurring on my own (Linux) computer. It's 
happened on a (Linux) cluster with many cores.  


On Thursday, June 2, 2016 at 3:45:24 AM UTC-4, Nils Gudat wrote:
>
> Fair enough. Does anyone have any clues as to how I would go about 
> investigating this? As has been said before, the stacktraces aren't very 
> helpful for segfaults, so how do I figure out what's going wrong here?
>


Re: [julia-users] Julia text editor on iPad?

2016-05-15 Thread Andrew Gibb
Is there a way to do shift-enter to execute a cell using the iPad software 
keyboard? Does shift-enter just work on a hardware keyboard?

Re: [julia-users] Forming run() calls programatically

2016-05-13 Thread Andrew Gibb
Those methods both work, and help me understand metaprogramming a little.

Thanks to you both.


[julia-users] Forming run() calls programatically

2016-05-13 Thread Andrew Gibb
I'm trying to use metaprogramming to create two functions. Each of which 
includes a similar, long, call to run(). The calls are not quite identical. 
Some flags have different arguments, and some are only present on one call. 
I can't work out how to get expression interpolation to happen within 
command interpolation. For example:

for (fn, arg) = ((:toucha, "a"), (:touchb, "b"))
#@eval ($fn)() = run(`touch $arg`)
@eval ($fn)() = run(`touch $(@eval ($arg))`)
end

In the first of these cases, the result for toucha is
toucha() = run(`touch $arg`)

and similarly for the second result, the parsing character $ is ignored by 
the eval. Is there a way to get this to instead return
toucha() = run(`touch a`)

?
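One hedged way around the nested interpolation (not necessarily what the repliers suggested, since their methods aren't quoted here) is to build the Cmd from a vector of strings, so only @eval's own `$` interpolation is involved:

```julia
# Sketch: Cmd(::Vector{String}) avoids backtick interpolation entirely,
# leaving `$fn` and `$arg` to be spliced by @eval alone.
for (fn, arg) in ((:toucha, "a"), (:touchb, "b"))
    @eval $fn() = run(Cmd(["touch", $arg]))
end

# In a scratch directory, toucha() now runs `touch a`.
cd(mktempdir()) do
    toucha()
    touchb()
    isfile("a") && isfile("b")
end
```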


[julia-users] Re: performance of two different array allocations

2016-04-21 Thread Andrew
I don't get it, but inference doesn't like something about concatenating 
the number with the array. If you convert the 1 to an array first, it works. I 
thought it was because 1 is an integer and you're joining it with an array 
of Float64 zeros, but using 1. doesn't work either.

function version1(N)
 b = [[1]; zeros(N-1)]
 println(typeof(b))
 for k = 1:N
   for j = 1:N
 b[j] += k
   end
 end
   end

This has no type issues.
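A hedged alternative that sidesteps the mixed-type concatenation altogether is to promote the scalar before concatenating (`version3` is my name, not from the thread):

```julia
# All elements are Float64 before vcat, so inference has nothing to guess.
function version3(N)
    b = vcat(1.0, zeros(N - 1))
    for k in 1:N, j in 1:N
        b[j] += k
    end
    return b
end

version3(2)   # == [4.0, 3.0]
```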

On Thursday, April 21, 2016 at 2:08:52 PM UTC-4, Jeremy Kozdon wrote:
>
> Thanks. I figured it was something along these lines. I'd forgotten about 
> the "@code_warntype" macro
>  
> On Thursday, April 21, 2016 at 10:45:17 AM UTC-7, Daniel O'Malley wrote:
>>
>> It looks like the type inference is failing in version1. If you do 
>> "@code_warntype version1(1000)", it shows that it is inferring the type of 
>> b as Any.
>>
>> On Thursday, April 21, 2016 at 11:32:27 AM UTC-6, Jeremy Kozdon wrote:
>>>
>>> In a class I'm teaching the students are using Julia and I couldn't for 
>>> the life of me figure out why one of my students codes was allocating a lot 
>>> of memory.
>>>
>>> I finally paired it down the following example that I don't understand:
>>>
>>> function version1(N)
>>>   b = [1;zeros(N-1)]
>>>   println(typeof(b))
>>>   for k = 1:N
>>> for j = 1:N
>>>   b[j] += k
>>> end
>>>   end
>>> end
>>>
>>>
>>> function version2(N)
>>>   b = zeros(N)
>>>   b[1] = 1
>>>   println(typeof(b))
>>>   for k = 1:N
>>> for j = 1:N
>>>   b[j] += k
>>> end
>>>   end
>>> end
>>>
>>> N = 1000
>>> println("compiling..")
>>> @time version1(N)
>>> version2(N)
>>> println()
>>> println()
>>>
>>> println("Version 1")
>>> @time version1(N)
>>> println()
>>>
>>> println("Version 2")
>>> @time version2(N)
>>>
>>> The output of this (without the compiling output) in v0.4.5 is:
>>>
>>> Version 1
>>> Array{Float64,1}
>>>   0.092473 seconds (3.47 M allocations: 52.920 MB, 3.24% gc time)
>>>
>>> Version 2
>>> Array{Float64,1}
>>>   0.001195 seconds (27 allocations: 8.828 KB)
>>>
>>> Both version produce the same type for Array b, but in version1 every 
>>> time through the loop allocation happens and in the version2 the only 
>>> allocation is of the initial array.
>>>
>>> I've not run into this one before (because I would never do version1), 
>>> but as all of us that teach know students will always surprise you with 
>>> their approaches.
>>>
>>> Any help understanding what's going on would be appreciated.
>>>
>>

Re: [julia-users] When do constants get pre-computed within a function?

2016-04-21 Thread Andrew
Interesting. After reading that issue, I saw that sqrt() might get folded 
by LLVM. It looks like this is correct.

const front = 1/(2 * pi)^(1/2)
function nPDF(x::Number)
  front * exp(-0.5 * x^2)
end
function nPDF2(x::Number)
  1/sqrt(2 * pi) * exp(-0.5 * x^2)
end

Both these functions produce the same @code_llvm. I guess sqrt() is folded 
but x^(1/2) is not.
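The results agree regardless of whether folding happens; only where the constant is computed differs. A quick sanity check (the function names are mine):

```julia
# Both definitions perform the same operations in the same order, so the
# results are bitwise identical; folding only changes compile-time work.
const FRONT = 1 / sqrt(2pi)
npdf_const(x)  = FRONT * exp(-0.5 * x^2)
npdf_inline(x) = 1 / sqrt(2pi) * exp(-0.5 * x^2)

npdf_const(1.0) == npdf_inline(1.0)
```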


On Thursday, April 21, 2016 at 8:36:43 AM UTC-4, Yichao Yu wrote:
>
> On Thu, Apr 21, 2016 at 7:56 AM, Andrew > 
> wrote: 
> > I recently wrote some code which called a user defined function in a 
> tight 
> > inner loop, and I was trying to speed it up. I realized that I had some 
> > 2^(1/2) square roots in my function which appear to be being computed 
> every 
> > time, which surprised me. I've seen simple expressions like 2*2 and 2/3 
> get 
> > converted to fixed values in the @code_llvm, and I hadn't realized more 
> > complicated constants weren't. Here's my test code: 
> > 
> > const front = 1/(2 * pi)^(1/2) 
> > function nPDF(x::Number) 
> >   front * exp(-0.5 * x^2) 
> > end 
> > function nPDF2(x::Number) 
> >   1/(2 * pi)^(1/2) * exp(-0.5 * x^2) 
> > end 
> > 
> > function test1(N) 
> >   for i = 1:N 
> > nPDF(1/i) 
> >   end 
> > end 
> > 
> > function test2(N) 
> >   for i = 1:N 
> > nPDF2(1/i) 
> >   end 
> > end 
> > 
> > julia> @time test1(5) 
> >   1.943844 seconds (4 allocations: 160 bytes) 
> > julia> @time test2(5) 
> >   4.716345 seconds (4 allocations: 160 bytes) 
> > 
> > 
> > 
>
> https://github.com/JuliaLang/julia/issues/9942 
>
> > 
> > Pre-computing the square root term and saving it as a constant doubles 
> the 
> > speed of the function. In my real code, making these changes gave me 
> about a 
> > 25% speedup. 
> > 
> > 
>


[julia-users] When do constants get pre-computed within a function?

2016-04-21 Thread Andrew
I recently wrote some code which called a user defined function in a tight 
inner loop, and I was trying to speed it up. I realized that I had some 
2^(1/2) square roots in my function which appear to be being computed every 
time, which surprised me. I've seen simple expressions like 2*2 and 2/3 get 
converted to fixed values in the @code_llvm, and I hadn't realized more 
complicated constants weren't. Here's my test code:

const front = 1/(2 * pi)^(1/2)
function nPDF(x::Number)
  front * exp(-0.5 * x^2)
end
function nPDF2(x::Number)
  1/(2 * pi)^(1/2) * exp(-0.5 * x^2)
end

function test1(N)
  for i = 1:N
nPDF(1/i)
  end
end

function test2(N)
  for i = 1:N
nPDF2(1/i)
  end
end

julia> @time test1(5)
  1.943844 seconds (4 allocations: 160 bytes)
julia> @time test2(5)
  4.716345 seconds (4 allocations: 160 bytes)




Pre-computing the square root term and saving it as a constant doubles the 
speed of the function. In my real code, making these changes gave me about 
a 25% speedup.
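
An alternative to hoisting the value into a global `const` is to splice the computed value into the method body at definition time, which sidesteps the folding question entirely (a sketch; `nPDF3` is a hypothetical name, not from the thread):

```julia
# $(...) computes 1/(2pi)^(1/2) once, at definition time, and splices the
# resulting literal into the method body, so nothing is recomputed per call.
@eval nPDF3(x::Number) = $(1/(2 * pi)^(1/2)) * exp(-0.5 * x^2)

nPDF3(0.0)   # ≈ 0.3989422804014327, the standard normal density at 0
```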




Re: [julia-users] Re: Rounding to zero from positive or negative numbers results in positive or negative zero.

2016-04-20 Thread Andrew Gibb
On Wednesday, 20 April 2016 15:18:06 UTC+1, Stefan Karpinski wrote:
>
> IEEE has not made the programming language designer's life easy here.
>

Perhaps it's a subtle attempt to incentivise more designers of mathematical 
programming languages into IEEE standards committees?!

 

>
> On Wed, Apr 20, 2016 at 5:51 AM, Milan Bouchet-Valat  > wrote:
>
>> On Tuesday, April 19, 2016 at 22:10 -0700, Jeffrey Sarnoff wrote:
>> > Hi,
>> >
>> > You have discovered that IEEE standard floating point numbers have
>> > two distinct zeros: 0.0 and -0.0.  They compare `==` even though they
>> > are not `===`.  If you want to consider +0.0 and -0.0 to be the same,
>> > use `==` or `!=` not `===`  or `!==` when testing floating point
>> > values (the other comparisons <=, <, >=, > treat the two zeros as a
>> > single value).
>> There's actually an open issue about what to do with -0.0 and NaN in
>> Dicts: https://github.com/JuliaLang/julia/issues/9381
>>
>> It turns out it's very hard to find a good solution.
>>
>>
>> Regards
>>
>> > > Hello everyone!
>> > > I was wondering if the following behavior of round() has a special
>> > > purpose:
>> > >
>> > > a = round(0.1)
>> > > 0.0
>> > >
>> > > b = round(-0.1)
>> > > -0.0
>> > >
>> > > a == b
>> > > true
>> > >
>> > > a === b
>> > > false
>> > >
>> > > bits(a)
>> > > "0000000000000000000000000000000000000000000000000000000000000000"
>> > >
>> > >
>> > > bits(b)
>> > > "1000000000000000000000000000000000000000000000000000000000000000"
>> > >
>> > > So the sign stays around...
>> > >
>> > > I am using these rounded numbers as keys in a dictionary and julia
>> > > can tell the difference. 
>> > >
>> > > For example, I expected something like this:
>> > > dict = [i => exp(i) for i in [a,b]]
>> > > Dict{Any,Any} with 1 entry:
>> > >  0.0 => 1.0
>> > >
>> > > but got this:
>> > > dict = [i => exp(i) for i in [a,b]]
>> > > Dict{Any,Any} with 2 entries:
>> > >   0.0  => 1.0
>> > >   -0.0 => 1.0
>> > >
>> > > It is not a big problem really but I would like to know where can
>> > > this behaviour come handy.
>> > >
>> > > Cheers!
>> > >
>>
>
>
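
When the two zeros are unwanted as distinct keys, normalizing before insertion is a simple workaround (a sketch; `canonical` is an illustrative helper name, not an API from the issue thread):

```julia
# -0.0 == 0.0 is true, so comparing with == maps both signed zeros to +0.0,
# while Dict uses isequal, which distinguishes them.
canonical(x::Float64) = x == 0.0 ? 0.0 : x

d = Dict{Float64,Float64}()
for k in (0.0, -0.0)
    d[canonical(k)] = exp(k)
end
length(d)   # 1: a single entry, 0.0 => 1.0
```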

Re: [julia-users] 'do' notation in documentation?

2016-04-17 Thread Andrew Gibb
I have a little document which I add to periodically called "Julia 
unsearchables" where I note down which things can't be found in the docs 
without asking someone. I do hope, once my Thesis is done, that I can 
contribute it to the documentation. 

[julia-users] Re: DataFrames: How to change column name?

2016-04-13 Thread Andrew McKay
minor update: 

rename!(df, :x1, :randomnumbers)


On Monday, December 9, 2013 at 2:27:10 AM UTC-8, Johan Sigfrids wrote:
>
> There is a rename!() function which does this. You can use it ilke this:
> df = DataFrame(x1=rand(5))
> rename!(df, "x1", "randomnumbers")
>
>
>
> On Monday, December 9, 2013 11:50:05 AM UTC+2, RecentConvert wrote:
>>
>> Data has been loaded using DataFrames but since the column names are 
>> *not* on the first line the resulting data set has default column names. 
>> How do I change the names after the fact?
>>
>

[julia-users] Re: Can Julia handle a large number of methods for a function (and a lot of types)?

2016-04-02 Thread Andrew Keller
Hi Oliver,

Regarding your first question posed in this thread, I think you might be 
interested in this documentation 
<http://docs.julialang.org/en/latest/devdocs/functions/> of how functions 
will work in Julia 0.5 if you haven't read it already. There is some 
discussion of how methods are dispatched, as well as compiler efficiency 
issues. I hope you don't mind that I've tried out your setindex and 
getindex approach. It is very pleasant to use but I have not benchmarked it 
in any serious way, mainly because I'm not sure what a sensible benchmark 
would be. If you'd like me to try out something I'll see what I can do.

It sounds like you have probably been thinking deeply about instrument 
control for a much longer period of time than I have. I'll write you again 
once I've gotten our codebase more stable and documented, and I welcome 
criticism. I haven't given much thought to coordinated parallel access yet 
but I agree that it will be important. A short summary is that right now we 
have one package containing a module with submodules for each instrument. 
Each instrument has an explicit type and most methods are generated using 
metaprogramming based off some template for each instrument type. Most 
instrument types are subtypes of `InstrumentVISA`, and there are a few 
methods that are assumed to work on all instruments supporting VISA. I must 
say, it is not obvious what the best type hierarchy is, and I could easily 
believe that traits are a better way to go when describing the 
functionality of instruments. You can find any number of discussion 
threads, GitHub issues, etc. on traits in Julia but I don't know what 
current consensus is.

Unitful.jl and SIUnits.jl broadly take the same approach: encode the units 
in the type signature of a value. Accordingly, Unitful.jl should have great 
run-time performance, which is a design goal. Anytime some operation 
with units is happening in a tight loop, it should be fast. I have only had 
time to look at some of the generated machine code but what I've looked at 
is the same or very close to what is generated for operations without 
units. I have not optimized the "compile-time" performance at all. Probably 
the first time a new unit is encountered, some modest performance penalty 
results. I'd like to relook at how I'm handling dimensions (length, time, 
etc.) because in retrospect I think I'm doing some odd things. An open 
question is how one could dispatch on the dimensions (e.g. x::Length). I 
tried this once and it sort of worked but the type-gymnastics became very 
ugly, so maybe something like traits would be better.

Last I checked, the two packages are different in that Unitful.jl supports: 
rational powers of the units (you can do Hz^(1/2), useful for noise 
spectra); non-SI units like imperial units; the ability to add new units on 
the fly with macros; preservation of exact conversions. My package only 
supports Julia 0.5 though. I think the differences between the packages 
arise mainly because SIUnits.jl was written when Julia was less mature. 
SIUnits.jl is still sufficient for many needs and is remarkably clever.

Thanks for linking to your code. I have no experience with Scala but I will 
take a look at it.

Best,
Andrew


On Saturday, April 2, 2016 at 4:45:04 AM UTC-7, Oliver Schulz wrote:
>
> Hi Andrew,
>
> sorry, I overlooked your post, somehow ...
>
> On Monday, March 28, 2016 at 3:47:42 AM UTC+2, Andrew Keller wrote:
>>
>> Just a heads up that I've been working on something similar for the past 
>> six months or so.
>>
>
> Very nice - I'm open collaborate on this topic, of course. At the moment, 
> we're still actively using my Scala/Akka-based system 
> (https://github.com/daqcore/daqcore-scala) in our group, but I'm starting 
> to map out it's Julia replacement. daqcore-scala currently implements some 
> VME flash-ADCs and some slow-control devices like a Labjack, some HV 
> Supplies, vacuum pressure gauges, etc., but the architecture is not device 
> specific. The Julia version is intended to be equally generic, but will 
> initially target the same/similar devices. I'd like to make it more 
> modular, though, it doesn't all have to end up in one package.
>
> One requirement will be coordinated parallel access to the instruments 
> from different places of the code (can't have one Task sending a command to 
> an instrument while another task isn't done with his query - which may span 
> multiple I/O operations - yet). Also multi-host operation for 
> high-throughput applications might become necessary at some point. The 
> basis for both came for free with Scala/Akka actors, and I started 
> Actors.jl (https://github.com/daqcore/Actors.jl) with this in mind. 
> Another thing that 

[julia-users] Re: Can Julia handle a large number of methods for a function (and a lot of types)?

2016-03-27 Thread Andrew Keller
Hi Oliver,

Just a heads up that I've been working on something similar for the past 
six months or so. I'm fortunate enough to be starting a new project where I 
have no legacy code and the freedom to play around a bit. I think we have 
similar ideas: properties are abstract / singleton types in my code as 
well. I really like your idea of using `getindex` and `setindex!` but I 
didn't think of that initially (sigh...) and at present have defined 
methods like:

configure(ins::Instrument,::Type{DeviceFeature}, value, channel::Integer=1)
inspect(ins::Instrument, ::Type{DeviceFeature})

My approach would have the same scaling issues you worry about. I hope it 
won't be such an issue and it would be kind of a shame to resort to 
separate functions for each instrument property in Julia, so I'm just going 
full speed ahead.

My code is not yet ready for public use and is not stable, but the source 
is in my repo <https://github.com/ajkeller34/PainterQB.jl> if you're 
curious. I should caution that it's a bit of a mess right now and the 
documentation is not up-to-date, so some of it reflects things that are no 
longer true, or ideas I have abandoned / improved upon. Seeing as how I 
need to do some real science soon I hope the source code will stabilize 
within a few weeks. I would welcome discussion if you want to combine 
efforts on this. I'm not sure if I want to invite PRs yet until I have the 
project in a more stable state but discussion is always good.

As I've been working on this since I started learning Julia, there are a 
number of things I did early on that I regret and am now fixing. One thing 
you'll find becomes annoying quickly is namespace issues: if you want to 
put different instruments in different modules, shared properties or 
functions may require a lot of careful imports/exports that tend to break 
easily. I'd suggest that rather than hard coding properties in source 
files, you store what you need in some standard external format. I'm now 
storing my SCPI codes, Julia property names, argument types, etc. in a JSON 
file that Julia parses to generate code with metaprogramming. With this 
approach I believe I'll be able to write code to scrape instrument manuals, 
and thereby generate code for new instruments semiautomatically. You'd 
still need to decide on property names and such, and write regular 
expressions to scrape the manuals, but that's not so bad.
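
The spec-file-plus-metaprogramming idea reduces to a loop over a table that `@eval`s one method per entry, roughly like this (a sketch with hypothetical property names and SCPI codes, not PainterQB's actual format):

```julia
# A tiny property table, standing in for data parsed from an external JSON file.
const PROPERTIES = Dict(:Frequency => "FREQ", :TriggerLevel => "TRIG:LEV")

# Generate one query function per property at load time.
for (prop, scpi) in PROPERTIES
    fname = Symbol("inspect_", lowercase(string(prop)))
    @eval $fname(send) = send($scpi * "?")   # `send` stands in for instrument I/O
end

inspect_frequency(cmd -> cmd)    # returns "FREQ?"
```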

Finally, I think some really great things could be done with this sort of 
code if Julia had better native support for physical units. For instance, 
on a vector network analyzer you could write something like:

vna[Frequency] = 4GHz:1MHz:10GHz

and thereby set frequency start, stop, and step at once. I have a units 
package for Julia v0.5 that I'm also developing when I have the time (
Unitful.jl <https://github.com/ajkeller34/Unitful.jl>), but there are some 
tweaks to Base that would help the package play nicely. If that sounds 
appealing, it'd be great if you could voice your support for that, e.g. 
here: https://github.com/JuliaLang/julia/pull/15394

Best wishes,
Andrew Keller

On Sunday, March 27, 2016 at 5:34:31 AM UTC-7, Oliver Schulz wrote:
>
> Hello,
>
> I'm designing a scheme to control and read out hardware devices (mostly 
> lab instruments) via Julia. The basic idea is that each device type is 
> characterized by a set of features/properties that have a value type and 
> dimensionality and that can be read-only or read/write (a bit like SCPI or 
> VISA, conceptually).
>
> Features/properties would be things like "InputVoltage" for a multimeter, 
> "Waveform", "TriggerLevel", etc. for an oscilloscope, and so on. They would 
> be represented by (abstract or singleton) types. Obviously, certain 
> features would be supported by many different device classes.
>
> For the API, I was thinking of an interface like this:
>
> current_feature_value = mydevice[SomeDeviceFeature]
> mydev[SomeDeviceFeature] = new_feature_value
>
> resp. things like
>
> mydev[SomeMultiChannelFeature, channel]
>
> for feature with multiple instances (e.g. in muli-channel devices). The 
> implementation would, of course, be
>
> getindex(dev::SomeDeviceType, ::Type{SomeDeviceFeature}) = ...
> setindex!(dev::SomeDeviceType, value, ::Type{SomeDeviceFeature}) = ...
>
> etc. This would mean having a large number of classes for all features 
> (let's say around 200), and an even larger number of methods ( 
> O(number_of_device_classes * typical_number_of_features_per_device), let's 
> say around 5000) for getindex and setindex!. Can Julia handle this, or 
> would this result in long compile times and/or bad performance?
>
> An alternative would be, of course, to represent each feature by a 
> specific se
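
The feature-as-type indexing interface sketched in the question looks roughly like this when filled in (current syntax, with made-up `FakeScope`/`TriggerLevel` names; not code from either package, and the thread itself predates Julia 1.0):

```julia
# Features as singleton types, with getindex/setindex! dispatching on them.
abstract type DeviceFeature end
struct TriggerLevel <: DeviceFeature end

# A stand-in "instrument" that stores settings in a Dict keyed by feature type.
struct FakeScope
    settings::Dict{DataType,Any}
end
FakeScope() = FakeScope(Dict{DataType,Any}())

Base.getindex(dev::FakeScope, ::Type{F}) where {F<:DeviceFeature} = dev.settings[F]
Base.setindex!(dev::FakeScope, v, ::Type{F}) where {F<:DeviceFeature} =
    (dev.settings[F] = v)

scope = FakeScope()
scope[TriggerLevel] = 0.5
scope[TriggerLevel]   # returns 0.5
```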

[julia-users] npm, left_pad and julia

2016-03-24 Thread andrew cooke
i guess people are aware of the fuss with npm and left_pad (if not, see 
https://www.metafilter.com/158105/Ive-liberated-my-modules for a summary).

since julia's package system is based around github i wondered if it could 
also suffer from similar problems?  if so, is this being addressed somewhow?

cheers,
andrew



[julia-users] Reusing memory with FileIO

2016-02-23 Thread Andrew Gibb
Hi,

I'm doing work reading image sequences using Images.jl. I haven't been able 
to find in the docs for Images or FileIO a reference for how to load an 
image into pre-allocated memory. Does such a thing exist? 

Thanks

Andy


[julia-users] Re: Unitful.jl for physical units

2016-02-17 Thread Andrew Keller
Hi Jake,

I agree with Jeffrey's response mostly, but want to clarify that Unitful is 
not strictly focused on the SI system. You'll see that you can use units of 
acre-feet, very much not an SI unit, which could be useful if you happen to 
manage a water reservoir in the United States, I guess.

If you were instead asking whether or not you could write methods that 
dispatch on the dimensions of a unit, the current answer is no, although 
maybe that could change eventually.

Best,
Andrew

On Wednesday, February 17, 2016 at 3:25:37 AM UTC-8, Jeffrey Sarnoff wrote:
>
> Jake,
>
> Julia's type system is well suited to do just that.  Unitful is focused on 
> units in the SI system, things like Meters, Kilograms and Joules.
>
> One approach to abstract units like size, where size may be relative to 
> the number of pixels on a screen or the width of a page,
> is to define your own type, a kind of size relative to something.  In your 
> example, s is not a unit of measure (strictly speaking);
> s is a quantity interpreted in terms of some absolute or relative unit of 
> measure -- 5 pixels, 1/4 page.  Because pixels and pages
> are not always the same number of, say, millimeters, using SI units for 
> that abstraction likely is not what you want.
>
> If you want more guidance, please give some more context.
>
> On Wednesday, February 17, 2016 at 4:43:13 AM UTC-5, Jake Rosoman wrote:
>>
>> Is it possible to talk about abstract types of units like size. e.g 
>> `drawline(s::Size) = ...`?
>>
>> On Saturday, February 13, 2016 at 9:23:22 AM UTC+13, Andrew Keller wrote:
>>>
>>> I'm happy to share a package I wrote for using physical units in Julia, 
>>> Unitful.jl <https://www.github.com/ajkeller34/Unitful.jl>. Much credit 
>>> and gratitude is due to Keno Fischer for the SIUnits.jl 
>>> <https://www.github.com/keno/SIUnits.jl> package which served as my 
>>> inspiration. This is a work in progress, but I think perhaps a serviceable 
>>> one depending on what you're doing. 
>>>
>>> Like SIUnits.jl, this package encodes units in the type signature to 
>>> avoid run-time performance penalties. From there, the implementations 
>>> diverge. The package is targeted to Julia 0.5 / master, as there are some 
>>> limitations with how promote_op is used in Julia 0.4 (#13803) 
>>> <https://github.com/JuliaLang/julia/pull/13803>. I decided it wasn't 
>>> worth targeting 0.4 if the behavior would be inconsistent. 
>>>
>>> Some highlights include:
>>>
>>>- Non-SI units are treated on the same footing as SI units, with 
>>>only a few exceptions (unit conversion method). Use whatever weird 
>>>units you want.
>>>- Support for units like micron / (meter Kelvin), where some of the 
>>>units could cancel out but you don't necessarily want them to.
>>>- Support for LinSpace and other Range types. Probably there are 
>>>still some glitches to be found, though.
>>>- Support for rational exponents of units.
>>>- Some tests (see these for usage examples).
>>>
>>> Please see the documentation for a comprehensive discussion, including 
>>> issues / to do list, as well as how to add your own units, etc.
>>> Comments and feedback are welcome.
>>>
>>> Best,
>>> Andrew Keller
>>>
>>

[julia-users] Re: Unitful.jl for physical units

2016-02-16 Thread Andrew Keller
Hi Yousef, 

Yes, I anticipate that there will be a simpler way to add units sometime, 
probably via a macro. However, my concern first and foremost is to make 
sure that operations involving units do not fail unexpectedly, particularly 
where ranges, arrays, and matrices are involved. This is really key for 
widespread adoption, I think, as well as for my own application. I'd also 
like to make it so that exact conversions are preserved where possible (see 
issue #3 on the Github page). In the meantime, you're welcome to submit a 
pull request for the units you need, or just modify your local copy.

Thanks to everyone for the positive feedback! It looks like there was a 
recent commit to Julia master that may introduce some breakage—I'll 
prioritize fixing that once the Mac OS X binary for Julia is updated. I may 
fix it sooner since I was able to build master.

Best
Andrew

On Tuesday, February 16, 2016 at 6:11:52 AM UTC-8, yousef.k...@gmail.com 
wrote:
>
> Hello Andrew,
>
> Your work is very promising. If you have ever dealt with oil field 
> equations, then you'll know that units are a headache to deal with.
> Your package should make life much easier.
>
> I'm curious, however, whether you'll implement an easier means of defining 
> new units.
> Something like a function that takes a unit, its base factor, and its 
> conversion to SI units, then gets it working in Unitful.
>
> Thanks again for your work,
> Yousef
>


[julia-users] Unitful.jl for physical units

2016-02-12 Thread Andrew Keller
I'm happy to share a package I wrote for using physical units in Julia, 
Unitful.jl <https://www.github.com/ajkeller34/Unitful.jl>. Much credit and 
gratitude is due to Keno Fischer for the SIUnits.jl 
<https://www.github.com/keno/SIUnits.jl> package which served as my 
inspiration. This is a work in progress, but I think perhaps a serviceable 
one depending on what you're doing. 

Like SIUnits.jl, this package encodes units in the type signature to avoid 
run-time performance penalties. From there, the implementations diverge. 
The package is targeted to Julia 0.5 / master, as there are some 
limitations with how promote_op is used in Julia 0.4 (#13803) 
<https://github.com/JuliaLang/julia/pull/13803>. I decided it wasn't worth 
targeting 0.4 if the behavior would be inconsistent. 

Some highlights include:

   - Non-SI units are treated on the same footing as SI units, with only a 
   few exceptions (unit conversion method). Use whatever weird units you 
   want.
   - Support for units like micron / (meter Kelvin), where some of the 
   units could cancel out but you don't necessarily want them to.
   - Support for LinSpace and other Range types. Probably there are still 
   some glitches to be found, though.
   - Support for rational exponents of units.
   - Some tests (see these for usage examples).

Please see the documentation for a comprehensive discussion, including 
issues / to do list, as well as how to add your own units, etc.
Comments and feedback are welcome.

Best,
Andrew Keller


[julia-users] Re: ccall with an opaque struct

2016-02-11 Thread Andrew Adare
Yes!! That works:

julia> p = Ref{Ptr{Void}}()
Base.RefValue{Ptr{Void}}(Ptr{Void} @0x0000000000000000)

julia> ccall((:sp_get_port_by_name, "libserialport"),Cint,(Cstring, Ref{Ptr{
Void}}), "/dev/cu.usbmodem1413", p)
0

julia> bytestring(ccall((:sp_get_port_name, "libserialport"), Ptr{Cchar}, (
Ptr{Void},), p[]))
"/dev/cu.usbmodem1413"

julia> bytestring(ccall((:sp_get_port_description, "libserialport"), Ptr{
Cchar}, (Ptr{Void},), p[]))
"STM32 STLink"
and so forth.

Based on your help, I'm now also able to ccall a function with this 
signature

enum sp_return sp_list_ports(struct sp_port ***list_ptr);

as follows:

julia> ports = Ref{Ptr{Ptr{Void}}}()

julia> ccall((:sp_list_ports, "libserialport"), Cint, (Ref{Ptr{Ptr{Void
}}},), ports)

julia> ports
Base.RefValue{Ptr{Ptr{Void}}}(Ptr{Ptr{Void}} @0x7ffe8d8f2a00)

julia> a = pointer_to_array(ports[], (2,), true)
2-element Array{Ptr{Void},1}:
 Ptr{Void} @0x7ffe8d9617e0
 Ptr{Void} @0x7ffe8d989990
julia> bytestring(ccall((:sp_get_port_name, "libserialport"), Ptr{Cchar}, (
Ptr{Void},), a[1]))
"/dev/cu.Bluetooth-Incoming-Port"

julia> bytestring(ccall((:sp_get_port_name, "libserialport"), Ptr{Cchar}, (
Ptr{Void},), a[2]))
"/dev/cu.usbmodem1413"

Maybe that's useful to others who, like me, find call signatures like this 
to be a bit tricky.

Many thanks,
Andrew


On Thursday, February 11, 2016 at 8:35:37 AM UTC-7, Andrew Adare wrote:
>
> I'm trying to call out to some library functions from libserialport 
> <http://sigrok.org/wiki/Libserialport>. It is a small c library that 
> makes extensive use of opaque structs in the API. In c I can do the 
> following:
>
> #include <stdio.h>
> #include <libserialport.h>
>
>
> // Compile:
> // gcc -o example example.c $(pkg-config --cflags --libs libserialport)
>
>
> int main(int argc, char *argv[])
> {
>   char *uri = argv[1];
>   struct sp_port *port;
>
>
>   if (sp_get_port_by_name(uri, &port) == SP_OK)
>   {
> char *portname = sp_get_port_name(port);
> char *portdesc = sp_get_port_description(port);
> printf("Port name: %s\n", portname);
> printf("Description: %s\n", portdesc);
>   }
>   else
> printf("Could not find %s\n", uri);
>   return 0;
> }
>
>
> As an example, if I run this with an ARM STM32 microcontroller plugged in 
> to my USB port, I get the following correct output (on OS X):
>
> $ ./example /dev/cu.usbmodem1413
> Port name: /dev/cu.usbmodem1413
> Description: STM32 STLink
>
> I want to call sp_get_port_by_name from Julia. It has this signature:
>
> enum sp_return sp_get_port_by_name(const char *portname, struct sp_port **
> port_ptr);
>
> The ccall docs advise declaring an empty type as a placeholder for an 
> opaque struct, then passing in a reference to it. I've tried many wrong 
> things, such as the following:
>
> julia> type Port end
>
> julia> p = Ref{Port}()
> Base.RefValue{Port}(#undef)
>
> julia> ccall((:sp_get_port_by_name, "libserialport"),Cint,(AbstractString, 
> Ref{Ref{Port}}),"/dev/cu.usbmodem1413",p)
> 0
>
> julia> p
> Base.RefValue{Port}(Segmentation fault: 11
>
>
> Any tips? The double pointer is causing me some confusion.
>
> Andrew
>


[julia-users] Re: ccall with an opaque struct

2016-02-11 Thread Andrew Adare
Thanks for the suggestions, but neither appear to work. The first produces 
a nonzero pointer,

julia> p = Ptr{Int}[0]
1-element Array{Ptr{Int64},1}:
 Ptr{Int64} @0x0000000000000000


julia> ccall((:sp_get_port_by_name, "libserialport"),Cint,(AbstractString, 
Ref{Ptr{Int}}),"/dev/cu.usbmodem1413",p)
0


julia> p
1-element Array{Ptr{Int64},1}:
 Ptr{Int64} @0x7fb4d8614f70


but the allocated buffer appears to be gibberish:

julia> bytestring(ccall((:sp_get_port_name, "libserialport"), Ptr{Cchar}, (
typeof(p),), p))
"pOaش\x7f"


julia> bytestring(ccall((:sp_get_port_name, "libserialport"), Ptr{Cchar}, (
Ptr{Int},), p[]))
"\x10!}\b\x01"


And unfortunately the second approach just produces a null pointer:

julia> immutable Port
   portpointer::Ptr{Void}
   end


julia> p = Port(0)
Port(Ptr{Void} @0x0000000000000000)


julia> ccall((:sp_get_port_by_name, "libserialport"),Cint,(AbstractString, 
Ref{Port}),"/dev/cu.usbmodem1413",p)
0


julia> p
Port(Ptr{Void} @0x0000000000000000)


So I'm still stumped...any further help is appreciated.

Andrew


On Thursday, February 11, 2016 at 9:24:07 AM UTC-7, Lutfullah Tomak wrote:
>
> p = Ptr{Int}[0]
> ccall((:sp_get_port_by_name, "libserialport"),Cint,(AbstractString, 
> Ref{Ptr{Int}}),"/dev/cu.usbmodem1413",p)
>
> or
>
> immutable Port
> portpointer::Ptr{Void}
> end
> p=Port(0)
> ccall((:sp_get_port_by_name, "libserialport"),Cint,(AbstractString, 
> Ref{Port}),"/dev/cu.usbmodem1413",p)
>
>

[julia-users] ccall with an opaque struct

2016-02-11 Thread Andrew Adare
I'm trying to call out to some library functions from libserialport 
<http://sigrok.org/wiki/Libserialport>. It is a small c library that makes 
extensive use of opaque structs in the API. In c I can do the following:

#include <stdio.h>
#include <libserialport.h>


// Compile:
// gcc -o example example.c $(pkg-config --cflags --libs libserialport)


int main(int argc, char *argv[])
{
  char *uri = argv[1];
  struct sp_port *port;


  if (sp_get_port_by_name(uri, &port) == SP_OK)
  {
char *portname = sp_get_port_name(port);
char *portdesc = sp_get_port_description(port);
printf("Port name: %s\n", portname);
printf("Description: %s\n", portdesc);
  }
  else
printf("Could not find %s\n", uri);
  return 0;
}


As an example, if I run this with an ARM STM32 microcontroller plugged in 
to my USB port, I get the following correct output (on OS X):

$ ./example /dev/cu.usbmodem1413
Port name: /dev/cu.usbmodem1413
Description: STM32 STLink

I want to call sp_get_port_by_name from Julia. It has this signature:

enum sp_return sp_get_port_by_name(const char *portname, struct sp_port **
port_ptr);

The ccall docs advise declaring an empty type as a placeholder for an 
opaque struct, then passing in a reference to it. I've tried many wrong 
things, such as the following:

julia> type Port end

julia> p = Ref{Port}()
Base.RefValue{Port}(#undef)

julia> ccall((:sp_get_port_by_name, "libserialport"),Cint,(AbstractString, 
Ref{Ref{Port}}),"/dev/cu.usbmodem1413",p)
0

julia> p
Base.RefValue{Port}(Segmentation fault: 11


Any tips? The double pointer is causing me some confusion.

Andrew


Re: [julia-users] if .. else on a whole array

2016-02-11 Thread Andrew
You can try the @vectorize_1arg macro. Just define u2 as you did, then
@vectorize_1arg Number u2.

Or, if you don't want to use the macro, you can try a for loop.  Example:

function u2(x::AbstractArray)
  out = similar(x)
  for i in eachindex(x)
out[i] = u2(x[i])
  end
  return out
end
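
Concretely, the scalar branch of the two-method approach might look like this (a sketch; `u2_0` and `Lbox` are hypothetical stand-ins for the poster's actual definitions):

```julia
u2_0(x) = x^2        # stand-in for the poster's u2_0
const Lbox = 10.0    # stand-in constant

# Scalar method: the ternary is fine here because x is a single number.
u2(x::Number) = x > Lbox/2 ? 0.0 : 0.5*(u2_0(x) + u2_0(Lbox - x)) - u2_0(Lbox/2)

u2(1.0)               # 0.5*(1 + 81) - 25 = 16.0
map(u2, [1.0, 6.0])   # elementwise: [16.0, 0.0]
```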


On Wednesday, February 10, 2016 at 7:15:30 AM UTC-5, Ferran Mazzanti wrote:
>
> Thanks Mauro...
>
> using two methods was the first thing I thought, but I strongly dislike 
> the idea because I'd have to maintain two different functions
> to do the same, which doubles the possibility of introducing bugs. I like 
> the idea of keeping the main function simple and unique,
> so if I change something the changes apply to all calculations in the same 
> way.
>
> Best regards,
>
> Ferran.
>
> On Wednesday, February 10, 2016 at 12:11:00 PM UTC+1, Mauro wrote:
>>
>> Probably cleanest would be to make two methods, one for scalars, one for 
>> arrays.  For the array one just loop. 
>>
>> This also works, but returns an array for scalar input (type inference 
>> should work once wrapped in a function): 
>>
>> julia> x = 5 
>> 5 
>>
>> julia> [ xi>5 ? 0:1 for xi in x] 
>> 1-element Array{Any,1}: 
>>  1 
>>
>> julia> x = 1:10 
>> 1:10 
>>
>> julia> [ xi>5 ? 0:1 for xi in x] 
>> 10-element Array{Any,1}: 
>>  1 
>>  1 
>>  1 
>>  1 
>>  1 
>>  0 
>>  0 
>>  0 
>>  0 
>>  0 
>>
>>
>> On Wed, 2016-02-10 at 11:58, Ferran Mazzanti  
>> wrote: 
>> > Hi folks, 
>> > 
>> > probably a stupid question but can't find the answer, so please help if 
>> you 
>> > can :) 
>> > I would like to evaluate an if.. else.. statement on a whole array. 
>> Actually 
>> > it's a bit more complicated, as I have a function that 
>> > previously was 
>> > 
>> > function u2(x) 
>> > return 0.5*(u2_0(x)+u2_0(Lbox-x))-u2_0(Lbox/2) 
>> > end; 
>> > 
>> > for some other defined function u2_0(x) and constant Lbox. The thing is 
>> that 
>> > I could directly evaluate that on a scalar x and on an array. 
>> > Now I have to change it and check if x is smaller than Lbox/2, 
>> returning 
>> > the same as above if it is, or 0 otherwise. 
>> > I tried something of the form 
>> > 
>> > function u2(x) 
>> > return x.>Lbox/2 ? 0 : 0.5*(u2_0(x)+u2_0(Lbox-x))-u2_0(Lbox/2) 
>> > end; 
>> > 
>> > but that complains when x is an array. What would be the easiest way to 
>> > achieve this? It should work for both scalars and arrays... 
>> > 
>> > Best regards and thanks, 
>> > 
>> > Ferran. 
>>
>

[julia-users] Strangely formatted HTTP response (Pkg.publish())

2016-02-03 Thread andrew cooke

anyone else seeing this?

INFO: Validating METADATA
INFO: Pushing ParserCombinator permanent tags: v1.7.5, v1.7.6
INFO: Submitting METADATA changes
INFO: Forking JuliaLang/METADATA.jl to andrewcooke
ERROR: strangely formatted HTTP response
 in curl at pkg/github.jl:53
 in req at pkg/github.jl:99
 in POST at pkg/github.jl:112
 in fork at pkg/github.jl:125
 in pull_request at pkg/entry.jl:327
 in publish at pkg/entry.jl:394
 in anonymous at pkg/dir.jl:31
 in cd at file.jl:22
 in cd at pkg/dir.jl:31
 in publish at pkg.jl:61

i've deleted my token thing on github and retried.

andrew


Re: [julia-users] are tasks threads in 0.4?

2016-02-03 Thread andrew cooke

this turned out to be two problems, one the "real" bug, and the other, 
causing the thread lock, was a "debugging" print within code called by 
print.  which i think has bitten me before...

On Saturday, 30 January 2016 14:11:29 UTC-3, andrew cooke wrote:
>
>
> i guess that makes sense.  thanks.  it's not clear to me why there's a 
> deadlock but the code looks pretty ugly.  i'll try simplifying it and see 
> how it goes.  andrew
>
> On Saturday, 30 January 2016 02:05:15 UTC-3, Yichao Yu wrote:
>>
>> On Fri, Jan 29, 2016 at 10:53 PM, andrew cooke  
>> wrote: 
>> > 
>> > i've been away from julia for a while so am not up-to-date on changes, 
>> and 
>> > am looking at an odd problem. 
>> > 
>> > i have some code, which is messier and more complex than i would like, 
>> which 
>> > is called to print a graph of values.  the print code uses tasks.  in 
>> 0.3 
>> > this works, but in 0.4 the program sits, using no CPU. 
>> > 
>> > if i dump the stack (using gstack PID) i see: 
>> > 
>> > Thread 4 (Thread 0x7efe3b6bb700 (LWP 1709)): 
>> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
>> > /lib64/libpthread.so.0 
>> > #1  0x7efe3bf62b5b in blas_thread_server () from 
>> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
>> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
>> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
>> > Thread 3 (Thread 0x7efe3aeba700 (LWP 1710)): 
>> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
>> > /lib64/libpthread.so.0 
>> > #1  0x7efe3bf62b5b in blas_thread_server () from 
>> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
>> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
>> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
>> > Thread 2 (Thread 0x7efe3a6b9700 (LWP 1711)): 
>> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
>> > /lib64/libpthread.so.0 
>> > #1  0x7efe3bf62b5b in blas_thread_server () from 
>> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
>> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
>> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
>> > Thread 1 (Thread 0x7f0044710740 (LWP 1708)): 
>> > #0  0x7f0042e8120d in pause () from /lib64/libpthread.so.0 
>> > #1  0x7f0040a190fe in julia_wait_17546 () at task.jl:364 
>> > #2  0x7f0040a18ea1 in julia_wait_17544 () at task.jl:286 
>> > #3  0x7f0040a40ffc in julia_lock_18599 () at lock.jl:23 
>> > #4  0x7efe3ecdbeb7 in ?? () 
>> > #5  0x7ffd3e6ad2c0 in ?? () 
>> > #6  0x in ?? () 
>> > 
>> > which looks suspiciously like some kind of deadlock. 
>> > 
>> > but i am not using threads, myself.  just tasks. 
>>
>> Tasks are not threads. You can see the threads are started by openblas. 
>>
>> IIUC tasks can have dead lock too, depending on how you use it. 
>>
>> > 
>> > hence the question.  any pointers appreciated. 
>> > 
>> > thanks, 
>> > andrew 
>> > 
>>
>

[julia-users] Re: ANN: new blog post on array indexing, iteration, and multidimensional algorithms

2016-02-02 Thread Andrew Gibb
Agreed! A very clear introduction which shows off some of the power of 
these new Array iterators. 

Thanks, Tim.

Andy

On Monday, 1 February 2016 18:55:06 UTC, Tim Holy wrote:
>
> It's come to my attention that some of the exciting capabilities of julia 
> 0.4 
> for indexing, iteration, and writing generic multidimensional algorithms 
> may 
> be under-documented. This new blog post 
>
> http://julialang.org/blog/2016/02/iteration/ 
>
> aims to provide a "gentle tutorial" illustrating how to use these 
> capabilities. 
>
> Best, 
> --Tim 
>
>
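
For readers who have not tried them yet, here is a tiny taste of the 0.4 
iteration tools the post covers (a sketch of mine, not taken from the post 
itself):

```julia
# eachindex picks the fastest index type (linear or cartesian) for A;
# CartesianRange (the 0.4-era name) yields explicit multidimensional indices
A = rand(3, 4)
s = 0.0
for i in eachindex(A)
    s += A[i]
end
for I in CartesianRange(size(A))
    @assert A[I] == A[I[1], I[2]]
end
```

Both loops work unchanged for arrays of any dimensionality, which is the 
point of the blog post.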

[julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2016-01-30 Thread Andrew
I just ran several of these benchmarks using the code and compilation flags 
available 
at https://github.com/jesusfv/Comparison-Programming-Languages-Economics . 
On my computer Julia is faster than C, C++, and Fortran, which I find 
surprising, unless some really dramatic optimization happened since 0.2.

My results are, on a Linux machine:

Julia 0.4.2: 1.44s
Julia 0.3.13: 1.60s
C, gcc 4.8.4: 1.65s
C++, g++: 1.64s
Fortran, gfortran 4.8.4: 1.65s
Matlab R2015b: 5.65s
Matlab R2015b, Mex inside loop: 1.83s
Python 2.7: 50.9s
Python 2.7, Numba: 1.88s with warmup

It's possible there's something bad about my configuration, as I don't 
normally use C and Fortran. In the paper their C/Fortran code runs in 0.7s; 
I don't think their computer is twice as fast as mine, but maybe it is. 

On Monday, June 16, 2014 at 11:52:07 AM UTC-4, Florian Oswald wrote:
>
> Dear all,
>
> I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>
> It takes a standard model from macro economics and computes its solution 
> with an identical algorithm in several languages. Julia is roughly 2.6 
> times slower than the best C++ executable. I was a bit puzzled by the result, 
> since in the benchmarks on http://julialang.org/, the slowest test is 
> 1.66 times C. I realize that those benchmarks can't cover all possible 
> situations. That said, I couldn't really find anything unusual in the Julia 
> code, did some profiling and removed type inference, but still that's as 
> fast as I got it. That's not to say that I'm disappointed, I still think 
> this is great. Did I miss something obvious here or is there something 
> specific to this algorithm? 
>
> The codes are on github at 
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>
>
>

Re: [julia-users] are tasks threads in 0.4?

2016-01-30 Thread andrew cooke

i guess that makes sense.  thanks.  it's not clear to me why there's a 
deadlock but the code looks pretty ugly.  i'll try simplifying it and see 
how it goes.  andrew

On Saturday, 30 January 2016 02:05:15 UTC-3, Yichao Yu wrote:
>
> On Fri, Jan 29, 2016 at 10:53 PM, andrew cooke  > wrote: 
> > 
> > i've been away from julia for a while so am not up-to-date on changes, 
> and 
> > am looking at an odd problem. 
> > 
> > i have some code, which is messier and more complex than i would like, 
> which 
> > is called to print a graph of values.  the print code uses tasks.  in 
> 0.3 
> > this works, but in 0.4 the program sits, using no CPU. 
> > 
> > if i dump the stack (using gstack PID) i see: 
> > 
> > Thread 4 (Thread 0x7efe3b6bb700 (LWP 1709)): 
> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
> > /lib64/libpthread.so.0 
> > #1  0x7efe3bf62b5b in blas_thread_server () from 
> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
> > Thread 3 (Thread 0x7efe3aeba700 (LWP 1710)): 
> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
> > /lib64/libpthread.so.0 
> > #1  0x7efe3bf62b5b in blas_thread_server () from 
> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
> > Thread 2 (Thread 0x7efe3a6b9700 (LWP 1711)): 
> > #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
> > /lib64/libpthread.so.0 
> > #1  0x7efe3bf62b5b in blas_thread_server () from 
> > /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so 
> > #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0 
> > #3  0x7f004231604d in clone () from /lib64/libc.so.6 
> > Thread 1 (Thread 0x7f0044710740 (LWP 1708)): 
> > #0  0x7f0042e8120d in pause () from /lib64/libpthread.so.0 
> > #1  0x7f0040a190fe in julia_wait_17546 () at task.jl:364 
> > #2  0x7f0040a18ea1 in julia_wait_17544 () at task.jl:286 
> > #3  0x7f0040a40ffc in julia_lock_18599 () at lock.jl:23 
> > #4  0x7efe3ecdbeb7 in ?? () 
> > #5  0x7ffd3e6ad2c0 in ?? () 
> > #6  0x in ?? () 
> > 
> > which looks suspiciously like some kind of deadlock. 
> > 
> > but i am not using threads, myself.  just tasks. 
>
> Tasks are not threads. You can see the threads are started by openblas. 
>
> IIUC tasks can have dead lock too, depending on how you use it. 
>
> > 
> > hence the question.  any pointers appreciated. 
> > 
> > thanks, 
> > andrew 
> > 
>
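
A minimal illustration of Yichao's point that tasks alone can deadlock (a 
sketch using 0.4-era primitives; the `yield`/`notify` lines are mine, not 
from the thread):

```julia
# one task, no OS threads: if nothing ever notifies the Condition,
# the program hangs exactly like a deadlock
c = Condition()
t = @async begin
    wait(c)
    println("woken")
end
yield()      # let the task start and block on `c`
notify(c)    # comment this out and wait(t) blocks forever
wait(t)
```

No threads are involved; the scheduler simply has no runnable task left, 
which matches the "sits, using no CPU" symptom described above.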


[julia-users] Re: Multiple dispatch doesn't work for named parameters?

2016-01-30 Thread Andrew
I see in the manual 
(http://docs.julialang.org/en/latest/manual/methods/#note-on-optional-and-keyword-arguments) 
that:

"Keyword arguments behave quite differently from ordinary positional 
arguments. In particular, they do not participate in method dispatch. 
Methods are dispatched based only on positional arguments, with keyword 
arguments processed after the matching method is identified."

I don't think you defined 2 different methods. You defined one method 
taking no positional arguments and a bunch of keyword arguments. Then you 
overwrote that method.
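
One common workaround (a sketch of mine, not from the thread; the `Val` tag 
is my addition) is to dispatch on a positional tag while keeping the keyword 
arguments:

```julia
# the positional Val tag participates in dispatch; the keywords do not
semimajor_axis(::Type{Val{:period}}; M=1, P=10) = (P^2 * M)^(1/3)
semimajor_axis(::Type{Val{:angle}}; α=25, mp=1, M=1, d=10) = α * d * M/mp / 954.9

semimajor_axis(Val{:period}, M=3, P=10)   # picks the first method
semimajor_axis(Val{:angle}, α=30)         # picks the second
```

This gives two genuinely different methods, so neither definition 
overwrites the other.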

On Saturday, January 30, 2016 at 11:37:51 AM UTC-5, Daniel Carrera wrote:
>
> This is very weird. It looks like multiple dispatch doesn't work at least 
> in some cases. Look:
>
> julia> semimajor_axis(;M=1,P=10) = (P^2 * M)^(1/3)
> semimajor_axis (generic function with 1 method)
>
> julia> semimajor_axis(;α=25,mp=1,M=1,d=10) = α * d * M/mp / 954.9
> semimajor_axis (generic function with 1 method)
>
> julia> semimajor_axis(M=3,P=10)
> ERROR: unrecognized keyword argument "P"
>
>
> I do understand that it may be risky to have two functions with the same 
> name and different named parameters (I really wish Julia allowed me to make 
> some/all named parameters mandatory), but I was expecting that my code 
> would work. I clearly defined a version of the function that has a named 
> parameter called "P".
>
> Does anyone know why this happens and what I can do to fix it? Am I 
> basically required to either use different function names, or give up on 
> using named parameters for everything?
>
> Cheers,
> Daniel.
>


[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-30 Thread Andrew
When I say Dierckx isn't a bottleneck for me, I mean my own code spends 
most of its time doing things other than interpolation, like solving 
non-linear equations and other calculations. All your loop does is 
interpolate, so there the interpolation must be the bottleneck. 

For the expectation, you can reuse the same vector. You could also 
devectorize it and compute the expectation point by point, though I don't 
know if this is any faster.

Maybe you have problems with unnecessary allocation and global variables in 
the original code.
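
Concretely, the vector reuse can look like this (a sketch against the 
Dierckx API used in this thread; the grid and function here are simplified 
placeholders, not the poster's):

```julia
using Dierckx   # Spline1D and the spl(x) call syntax, as used in this thread

xx = collect(linspace(0.01, 400.0, 300))
spl = Spline1D(xx, xx.^2)

Nalal, Naa = 200, 350
xprime = ones(Nalal * Naa)      # build the flat vector once, outside the loop
W_temp = spl(xprime)            # no per-iteration ones()/xprime[:] copies
for banana in 1:10
    W_temp = spl(xprime)        # spline evaluation is now the only work
end
W_temp2 = reshape(W_temp, Nalal, Naa)
```

The per-iteration `ones(Nalal, Naa)` and `xprime[:]` in the original each 
allocate a fresh array, which is where the gigabytes of allocation in the 
@time output come from.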

On Saturday, January 30, 2016 at 8:59:27 AM UTC-5, 
pokerho...@googlemail.com wrote:
>
> @Tomas: maybe check out Numerical Recipes in C: The Art of Scientific 
> Computing, 2nd edition. There is also an edition for Fortran. The code that 
> I use in C is basically from there. 
>
> @Andrew: The xprime needs to be in the loop. I just made it ones to 
> simplify but normally it changes every iteration. (In the DP problem, the 
> loop is calculating an expecation and xprime is the possible future value 
> of the state variable for each state of the world). Concerning the Dierckx 
> package. I don't know about the general behaviour but for my particular 
> problem (irregular grid + cubic spline) it is very slow. Run the following 
> code:
>
> using Dierckx
>
> spacing=1.5
> Nxx = 300
> Naa = 350
> Nalal = 200
> sigma = 10
> NoIter = 1
>
> xx=Array(Float64,Nxx)
> xmin = 0.01
> xmax = 400
> xx[1] = xmin
> for i=2:Nxx
> xx[i] = xx[i-1] + (xmax-xx[i-1])/((Nxx-i+1)^spacing)
> end
>
> f_util(c) =  c.^(1-sigma)/(1-sigma)
> W=Array(Float64,Nxx,1)
> W[:,1] = f_util(xx)
>
>
> spl = Spline1D(xx,W[:,1])
>
> function performance2(NoIter::Int64)
> W_temp = Array(Float64,Nalal*Naa)
> W_temp2 = Array(Float64,Nalal,Naa)
> xprime=Array(Float64,Nalal,Naa)
> for banana=1:NoIter
> xprime=ones(Nalal,Naa)
> W_temp = spl(xprime[:])
> end
> W_temp2 = reshape(W_temp,Nalal,Naa)
> end
>
> @time performance2(1)
>
> 30.878093 seconds (100.01 k allocations: 15.651 GB, 2.19% gc time)
>
>
>
> That's why I went on and asked my friend to help me out in the first 
> place. I think mnspline is really fast (not saying it's THE fastest) at 
> doing the interpolation itself (orders of magnitude faster than MATLAB). But then 
> I just don't understand how MATLAB can catch up by just looping through the 
> same operation over and over. Intuitively (maybe I'm wrong) it should be 
> somewhat proportional. If my code in Julia is 10 times faster within a 
> loop, and then I just repeat the operation in that particular loop very 
> often, how can it turn out to be only equally fast as MATLAB? Again, the 
> mnspline uses all my threads; maybe it has something to do with overhead. 
> I don't know, hints appreciated. 
>
>

[julia-users] are tasks threads in 0.4?

2016-01-29 Thread andrew cooke

i've been away from julia for a while so am not up-to-date on changes, and 
am looking at an odd problem.

i have some code, which is messier and more complex than i would like, 
which is called to print a graph of values.  the print code uses tasks.  in 
0.3 this works, but in 0.4 the program sits, using no CPU.

if i dump the stack (using gstack PID) i see:

Thread 4 (Thread 0x7efe3b6bb700 (LWP 1709)):
#0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x7efe3bf62b5b in blas_thread_server () from 
/home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x7f004231604d in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7efe3aeba700 (LWP 1710)):
#0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x7efe3bf62b5b in blas_thread_server () from 
/home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x7f004231604d in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7efe3a6b9700 (LWP 1711)):
#0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x7efe3bf62b5b in blas_thread_server () from 
/home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x7f004231604d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f0044710740 (LWP 1708)):
#0  0x7f0042e8120d in pause () from /lib64/libpthread.so.0
#1  0x7f0040a190fe in julia_wait_17546 () at task.jl:364
#2  0x7f0040a18ea1 in julia_wait_17544 () at task.jl:286
#3  0x7f0040a40ffc in julia_lock_18599 () at lock.jl:23
#4  0x7efe3ecdbeb7 in ?? ()
#5  0x7ffd3e6ad2c0 in ?? ()
#6  0x in ?? ()

which looks suspiciously like some kind of deadlock.

but i am not using threads, myself.  just tasks.

hence the question.  any pointers appreciated.

thanks,
andrew



[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-29 Thread Andrew
Your loop has a ton of unnecessary allocation. You can 
move xprime=ones(Nalal,Naa) outside the loop.
Also, you are converting xprime to a vector at every iteration. You can 
also do this outside the loop.

After the changes, I get

julia> include("test2.jl");
WARNING: redefining constant lib
  3.726049 seconds (99.47 k allocations: 15.651 GB, 6.56% gc time)

julia> include("test3.jl");
WARNING: redefining constant lib
  2.352259 seconds (29.40 k allocations: 5.219 GB, 3.29% gc time)

in test3 the performance function is 

function performance(NoIter::Int64)
W_temp = Array(Float64,Nalal*Naa)
W_temp2 = Array(Float64,Nalal,Naa)
xprime=Array(Float64,Nalal,Naa)
xprime=ones(Nalal,Naa)
xprime = xprime[:]
for banana=1:NoIter
W_temp = spl(xprime)
end
W_temp2 = reshape(W_temp,Nalal,Naa)
end

I don't know if you have this in your original code though. Maybe it's just 
your example.

I also have need for fast cubic splines because I do dynamic programming, 
though I don't think Dierckx is a bottleneck for me. I think the 
Interpolations package might soon have cubic splines on a non-uniform grid, 
and it's very fast.


On Friday, January 29, 2016 at 8:02:53 PM UTC-5, pokerho...@googlemail.com 
wrote:
>
> Hi,
>
> my original problem is a dynamic programming problem in which I need to 
> interpolate the value function on an irregular grid using a cubic spline 
> method. I was translating my MATLAB code into Julia and used the Dierckx 
> package in Julia to do the interpolation (there weren't so many 
> alternatives that did splines on an irregular grid, as far as I recall). In 
> MATLAB I use interp1. It gave exactly the same result but it was about 50 
> times slower which puzzled me. So I made this (
> http://stackoverflow.com/questions/34766029/julia-vs-matlab-why-is-my-julia-code-so-slow)
>  
> stackoverflow post. 
>
> The post is messy and you don't need to read through it I think. The 
> bottom line was that the Dierckx package apparently calls some Fortran code 
> which seems to be pretty old (and slow, and doesn't use multiple cores). 
> Nobody knows what exactly interp1 is doing; my guess is that it's coded 
> in C?! 
>
> So I asked a friend of mine who knows a little bit of C and he was so kind 
> to help me out. He translated the interpolation method into C and made it 
> such that it uses multiple threads (I am working with 12 threads). He also 
> put it on github here (https://github.com/nuffe/mnspline). Equipped with 
> that, I went back to my original problem and reran it. The Julia code was 
> still 3 times slower which left me puzzled again. The interpolation itself 
> was much faster now than MATLAB's interp1 but somewhere on the way that 
> advantage was lost. Consider the following minimal working example 
> preserving the irregular grid of the original problem which highlights the 
> point I think (the only action is in the loop, the other stuff is just 
> generating the irregular grid):
>
> MATLAB:
>
> spacing=1.5;
> Nxx = 300 ;
> Naa = 350;
> Nalal = 200; 
> sigma = 10 ;
> NoIter = 1; 
>
> xx=NaN(Nxx,1);
> xmin = 0.01;
> xmax = 400;
> xx(1) = xmin;
> for i=2:Nxx
> xx(i) = xx(i-1) + (xmax-xx(i-1))/((Nxx-i+1)^spacing);
> end
>
> f_util =  @(c) c.^(1-sigma)/(1-sigma);
> W=NaN(Nxx,1);
> W(:,1) = f_util(xx);
>
> W_temp = NaN(Nalal,Naa);
> xprime = NaN(Nalal,Naa);
>
> tic
> for banana=1:NoIter
> %   tic
>   xprime=ones(Nalal,Naa);
>   W_temp=interp1(xx,W(:,1),xprime,'spline');
> %   toc
> end
> toc
>
>
> Julia:
>
> include("mnspline.jl")
>
> spacing=1.5
> Nxx = 300
> Naa = 350
> Nalal = 200
> sigma = 10
> NoIter = 1
>
> xx=Array(Float64,Nxx)
> xmin = 0.01
> xmax = 400
> xx[1] = xmin
> for i=2:Nxx
> xx[i] = xx[i-1] + (xmax-xx[i-1])/((Nxx-i+1)^spacing)
> end
>
> f_util(c) =  c.^(1-sigma)/(1-sigma)
> W=Array(Float64,Nxx,1)
> W[:,1] = f_util(xx)
>
>
> spl = mnspline(xx,W[:,1])
>
> function performance(NoIter::Int64)
> W_temp = Array(Float64,Nalal*Naa)
> W_temp2 = Array(Float64,Nalal,Naa)
> xprime=Array(Float64,Nalal,Naa)
> for banana=1:NoIter
> xprime=ones(Nalal,Naa)
> W_temp = spl(xprime[:])
> end
> W_temp2 = reshape(W_temp,Nalal,Naa)
> end
>
> @time performance(NoIter)
>
>
> In the end I want to have a matrix, that's why I do all this reshaping in 
> the Julia code. If I comment out the loop and just consider one iteration, 
> Julia does it in  (I ran it several times, precompiling etc)
>
> 0.000565 seconds (13 allocations: 1.603 MB)
>
> MATLAB on the other hand: Elapsed time is 0.007864 seconds.
>
> The bottom line being that in Julia the code is much faster (more than 10 
> times in this case), which it should be, since I use all my threads and the 
> method is written in C. However, if I don't comment out the loop and run the 
> code as posted above:
>
> Julia:
> 3.205262 seconds (118.99 k allocations: 15.651 GB, 14.08% gc time)
>
> MATLAB:
> Elapsed time is 4.514449 seconds.
>
>
> If I run the whole loop apparent

[julia-users] Re: Immutable type with a function datatype

2016-01-18 Thread Andrew
I also wanted to do what you're doing when I started with Julia. I came 
from Java, so I'm used to the foo.func() syntax.
Here is a good way to do what I think you want with very similar syntax.

immutable Foo
end
func(::Foo, x) = x^2
foo = Foo()
func(foo, 3.)

You can also encapsulate parameters within instances of Foo like this. 
(reload Julia or use workspace() before redefining the type Foo)

immutable Foo
  a::Float64
end
func(foo::Foo, x) = foo.a * x^2
foo = Foo(5.)
func(foo, 3.)


Also, if you want to call the foo object directly, you can do this.

Base.call(foo::Foo, x) = func(foo, x)
foo(3.)

This is useful if you want to treat foo like a function and pass it to an 
optimization routine for example and don't want to use anonymous functions. 
It sometimes doesn't work correctly because foo won't be of type ::Function 
so some methods won't accept it, even though it acts like a function. I 
believe they are working on changing this.




On Monday, January 18, 2016 at 10:08:38 AM UTC-5, Anonymous wrote:
>
> Is the following code considered bad form in Julia?
>
> immutable Foo
> func::Function
> end
>
> foo = Foo(x->x^2)
> foo.func(3)
>
> This mimics the behavior of OOP since just like in OOP the internal method 
> cannot be changed (since the type is immutable).  Sometimes it really does 
> make the most sense to attach a function to an instance of a type, do I 
> take a performance hit doing things this way?
>


[julia-users] BinDeps Sources, when a URL doesn't have an appropriate extension

2015-11-26 Thread Andrew Gibb
Hi,

I'm trying to wrap a C library I've written. I want to use BinDeps to build 
some source. The source is on our internal gitorious git server. A .tar.gz 
of the code is available, but the URL to get that from just ends in the 
branch name. So in my code I have

provides( Sources, 
> URI("https://git0.rd.bbc.co.uk/misc/simpleexr/archive-tarball/master") , 
> simpleexr)
>

and later

GetSources(simpleexr) 


Which throws an error:  

> LoadError: I don't know how to unpack 
> /home/andrewg/.julia/v0.4/EXRImages/deps/downloads/master
>

Is there a way to insert an intermediate step to change the name of the 
downloaded file from master to something.tar.gz? (Or indeed is there a way 
to use the filename of the file which is returned from the server, which 
does have the correct extension? As I think about it, this seems a bit like 
a bug.)

I guess a bit more code context will be useful: 

provides( Sources, 
>> URI("https://git0.rd.bbc.co.uk/misc/simpleexr/archive-tarball/master") , 
>> simpleexr)
>
>
>> prefix = joinpath(BinDeps.depsdir(simpleexr),"usr")
>
> dldir = joinpath(BinDeps.depsdir(simpleexr),"downloads")
>
> provides(SimpleBuild, 
>
> (@build_steps begin
>
> GetSources(simpleexr)
>
> @build_steps begin
>
> FileRule(joinpath(prefix,"lib","libsimpleexr.so"), 
>> @build_steps begin
>
> `./build.sh`
>
> `cp libsimpleexr.so $prefix/lib`
>
> `cp libsimpleexr.so.1 $prefix/lib`
>
> `cp libsimpleexr.so.1.0 $prefix/lib`
>
> `cp libsimpleexr.h $prefix/include`
>
> end)
>
> end 
>
> end
>
> )
>
> , simpleexr, os = :Linux)
>
>
I guess if Tim Holy sees this he'll suggest I use Images/ImageMagick to 
read an exr image. If there's a way to access the pixel values stored in 
the image before the gamma correction is applied I would gladly use it. The 
last time I looked into it, I couldn't find any mention of this in 
ImageMagick, though. 


[julia-users] Ann: StatefulIterators (streams over iterables)

2015-11-23 Thread andrew cooke

I mentioned this here earlier, but since then okvs improved things hugely 
and it's now only a factor of 2 slower than "normal" iterators, which 
likely makes it useful for anything except critical inner loops.

The basic idea is that you can turn any iterable into something that is 
"consumed" operation by operation (ie a stream).

https://github.com/andrewcooke/StatefulIterators.jl

Examples:

julia> using StatefulIterators

julia> s = StatefulIterator([1,2,3,4,5])
StatefulIterators.StatefulIterator{Array{Int64,1},Int64}([1,2,3,4,5],1)

julia> collect(take(s, 2))
2-element Array{Any,1}:
 1
 2

julia> eltype(s)
Int64

julia> read(s)
3

julia> readbytes(s)
16-element Array{UInt8,1}:
 0x04
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x05
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00
 0x00




Re: [julia-users] Calling "super" methods

2015-11-22 Thread andrew cooke

ok (i thought there might be something more succinct, but i guess 
multimethods mean it's not simple).  thanks.  andrew

On Sunday, 22 November 2015 11:24:57 UTC-3, Yichao Yu wrote:
>
> > Can I do this, or do I need to pull the function out into a separate 
> name?
>
> invoke
>
> >
> > Thanks,
> > Andrew
> >
>
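
Spelled out, the `invoke` approach looks like this (a sketch of mine; the 
string tag is just for illustration):

```julia
# call the less-specific ("super") method explicitly via invoke,
# avoiding the infinite recursion of foo(Any(x))
foo(x) = "any"
foo(x::Integer) = "int: " * invoke(foo, Tuple{Any}, x)

foo(1)      # → "int: any", no stack overflow
foo(1.5)    # → "any"
```

`invoke(f, Tuple{T...}, args...)` selects the method matching the given 
signature rather than the most specific one, which is the closest analogue 
of an OO `super` call.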


[julia-users] Calling "super" methods

2015-11-22 Thread andrew cooke

In OO programming it's quite common to delegate work to a supertype from 
within a method.  The syntax might be something like:

class Foo {
method bar(...) {
...
super.bar(...)
...
}
}

I'd like to do the same in Julia, but don't know how to.  For example, I 
may have:

foo(x) = 

which I want to call from within foo(x::Integer), say.

julia> foo(x) = "any"
foo (generic function with 1 method)

julia> foo(x::Integer) = foo(Any(x))
foo (generic function with 2 methods)

julia> foo(1)
ERROR: StackOverflowError:
 in foo at none:1 (repeats 79996 times)

doesn't work.

Can I do this, or do I need to pull the function out into a separate name?

Thanks,
Andrew



[julia-users] Re: Triangular Dispatch, Integerm Range and UniformScaling error

2015-11-22 Thread andrew cooke

a!  ok, thanks.

On Sunday, 22 November 2015 10:40:04 UTC-3, Simon Danisch wrote:
>
> As far as I know, triangular dispatch hasn't hit the shores yet 
> (especially not in 0.4).
> This sort of works, because I is actually a global variable in Base (I::
> UniformScaling{Int64})
> Try some other name, and it will tell you that the variable is undefined, 
> as expected.
>
> Best,
> Simon
>
> Am Sonntag, 22. November 2015 13:38:29 UTC+1 schrieb andrew cooke:
>>
>>
>> Out of my depth here - no idea if this is a bug or me...
>>
>> julia> foo{I<:Integer,U<:UnitRange{I}}(u::U) = 1
>> ERROR: TypeError: UnitRange: in T, expected T<:Real, got UniformScaling{
>> Int64}
>>
>> Version 0.4.1-pre+22 (2015-11-01 00:06 UTC)
>>
>> Thanks, Andrew
>>
>
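
A 0.4-era workaround (a sketch of mine, not from the thread): express the 
constraint directly in the argument type, and avoid the name `I`, which is 
Base's exported identity operator:

```julia
# the dependent bound U<:UnitRange{I} is evaluated eagerly in 0.4,
# so constrain the argument type directly instead
foo{T<:Integer}(u::UnitRange{T}) = 1

foo(1:5)    # → 1, with T == Int
```

This sidesteps both problems at once: no dependent bound, and no clash with 
the global `UniformScaling` constant.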

[julia-users] Triangular Dispatch, Integerm Range and UniformScaling error

2015-11-22 Thread andrew cooke

Out of my depth here - no idea if this is a bug or me...

julia> foo{I<:Integer,U<:UnitRange{I}}(u::U) = 1
ERROR: TypeError: UnitRange: in T, expected T<:Real, got UniformScaling{
Int64}

Version 0.4.1-pre+22 (2015-11-01 00:06 UTC)

Thanks, Andrew


Re: [julia-users] Memory leak for animated plots

2015-11-20 Thread Andrew Keller
Thanks for your response. Patchwork.jl is impressive! This fix almost 
works, but there is some new issue.

When I first execute, I see the following:

Javascript error adding output!
ReferenceError: Patchwork is not defined
See your browser Javascript console for more details.

If I move the slider, the plot appears, but the Javascript error remains. I 
can however confirm that the memory leak is gone.

I've tried removing several .ji files, rebuilding several things, etc. and 
the error seems to persist. Thoughts? I'm running Chrome on Windows 10.


On Friday, November 20, 2015 at 9:48:51 AM UTC-8, Shashi Gowda wrote:
>
> If you install Patchwork.jl <http://github.com/shashi/Patchwork.jl>, and 
> re-run your notebook, it should fix this issue. (you might also need to 
> pre-compile Compose again - can be done by removing the .ji file in 
> ~/.julia/lib/v0.4/)
>
> Compose (which Gadfly uses for rendering to SVG) doesn't depend on 
> Patchwork, but if you have it installed, it will use the Patchwork backend 
> when you render a @manipulate of plots - Patchwork will then try to 
> reconcile previously rendered DOM nodes so that there are no performance 
> penalties of this sort.
>
> On Fri, Nov 20, 2015 at 10:48 PM, Andrew Keller  > wrote:
>
>> I think this is exactly what is happening. Some findings:
>>
>> 1) run the code:
>>
>> using Interact, Gadfly
>>> @manipulate for φ=0:π/16:4π, f=[:sin => sin, :cos => cos]
>>> plot((θ)->f(θ+φ),0,25)
>>> end
>>
>>
>> 2) Chrome dev tools--> Profiles--> heap snapshot.
>> 3) Click sin, cos, sin, cos, sin, cos, ... sin in the notebook
>> 4) Take another heap snapshot and look in comparison view. 
>>
>> It looks like among other things there are a lot of SVGPathElements and 
>> SVGTextElements that belong to detached DOM trees, suggesting the old plots 
>> never get properly disposed. If I instead capture a JS profile in the 
>> Chrome dev tools-->Timeline panel, it appears like the number of nodes and 
>> listeners increases without bound. 
>>
>> Now suppose I use Winston instead of Gadfly. The memory still appears to 
>> leak, although the plots are a little more lightweight and the leak is 
>> slower.
>>
>> Anyway, I'll submit an issue to the jupyter/notebook repo later today 
>> although I wish I could pin down better where exactly the leak is coming 
>> from.
>>
>> On Thursday, November 19, 2015 at 10:35:31 AM UTC-8, Keno Fischer wrote:
>>>
>>> Sounds like the memory leak is on the browser side? Maybe something is 
>>> keeping a javascript reference to the plot? Potentially a Jupyter/IJulia 
>>> bug?
>>>
>>> On Thu, Nov 19, 2015 at 12:01 PM, Stefan Karpinski >> > wrote:
>>>
>>>> This should work – if there's a memory leak that's never reclaimed by 
>>>> gc, that's a bug.
>>>>
>>>> On Thu, Nov 19, 2015 at 11:55 AM, Andrew Keller  
>>>> wrote:
>>>>
>>>>> Maybe generating a new plot every time is not great practice, on 
>>>>> account of the performance hit. That being said, I think it's perfectly 
>>>>> legitimate to do what I'm doing for prototyping purposes. I can achieve 
>>>>> the 
>>>>> frame rate I want and the main example shown on 
>>>>> https://github.com/JuliaLang/Interact.jl does the same thing I do, 
>>>>> generating a new plot each time.
>>>>>
>>>>> In fact, I'd encourage anyone reading this to just try that example, 
>>>>> and repeatedly click between sin and cos. I'm able to make the memory 
>>>>> consumption of my browser grow without bound. Surely someone besides 
>>>>> myself 
>>>>> has noticed this before! I don't think loading another package is a 
>>>>> serious 
>>>>> solution to the problem I'm describing, although your package certainly 
>>>>> looks useful for other purposes.
>>>>>
>>>>> Just to reiterate, this is not a small memory leak; this is like a 
>>>>> memory dam breach. I'm happy to help debug this but some assistance would 
>>>>> be appreciated.
>>>>>
>>>>>
>>>>> On Thursday, November 19, 2015 at 7:11:55 AM UTC-8, Tom Breloff wrote:
>>>>>>
>>>>>> You're creating a new Gadfly.Plot object every update, which is a bad 
>>>>>> idea even if Gadfly's memory management was perfect.

Re: [julia-users] Memory leak for animated plots

2015-11-20 Thread Andrew Keller
I think this is exactly what is happening. Some findings:

1) run the code:

using Interact, Gadfly
> @manipulate for φ=0:π/16:4π, f=[:sin => sin, :cos => cos]
> plot((θ)->f(θ+φ),0,25)
> end


2) Chrome dev tools--> Profiles--> heap snapshot.
3) Click sin, cos, sin, cos, sin, cos, ... sin in the notebook
4) Take another heap snapshot and look in comparison view. 

It looks like among other things there are a lot of SVGPathElements and 
SVGTextElements that belong to detached DOM trees, suggesting the old plots 
never get properly disposed. If I instead capture a JS profile in the 
Chrome dev tools-->Timeline panel, it appears like the number of nodes and 
listeners increases without bound. 

Now suppose I use Winston instead of Gadfly. The memory still appears to 
leak, although the plots are a little more lightweight and the leak is 
slower.

Anyway, I'll submit an issue to the jupyter/notebook repo later today 
although I wish I could pin down better where exactly the leak is coming 
from.

On Thursday, November 19, 2015 at 10:35:31 AM UTC-8, Keno Fischer wrote:
>
> Sounds like the memory leak is on the browser side? Maybe something is 
> keeping a javascript reference to the plot? Potentially a Jupyter/IJulia 
> bug?
>
> On Thu, Nov 19, 2015 at 12:01 PM, Stefan Karpinski  > wrote:
>
>> This should work – if there's a memory leak that's never reclaimed by gc, 
>> that's a bug.
>>
>> On Thu, Nov 19, 2015 at 11:55 AM, Andrew Keller > > wrote:
>>
>>> Maybe generating a new plot every time is not great practice, on account 
>>> of the performance hit. That being said, I think it's perfectly legitimate 
>>> to do what I'm doing for prototyping purposes. I can achieve the frame rate 
>>> I want and the main example shown on 
>>> https://github.com/JuliaLang/Interact.jl does the same thing I do, 
>>> generating a new plot each time.
>>>
>>> In fact, I'd encourage anyone reading this to just try that example, and 
>>> repeatedly click between sin and cos. I'm able to make the memory 
>>> consumption of my browser grow without bound. Surely someone besides myself 
>>> has noticed this before! I don't think loading another package is a serious 
>>> solution to the problem I'm describing, although your package certainly 
>>> looks useful for other purposes.
>>>
>>> Just to reiterate, this is not a small memory leak; this is like a 
>>> memory dam breach. I'm happy to help debug this but some assistance would 
>>> be appreciated.
>>>
>>>
>>> On Thursday, November 19, 2015 at 7:11:55 AM UTC-8, Tom Breloff wrote:
>>>>
>>>> You're creating a new Gadfly.Plot object every update, which is a bad 
>>>> idea even if Gadfly's memory management was perfect. Plots.jl gives you 
>>>> the 
>>>> ability to add to or replace the underlying data like this:
>>>>
>>>> using Plots
>>>> gadfly()
>>>> getxy() = (1:10, rand(10))
>>>> plt = plot(getxy()...)
>>>>
>>>> # overwrite underlying plot data without building a new plot
>>>> plt[1] = getxy()
>>>>
>>>>
>>>> You can also use familiar push! and append! calls. 
>>>>
>>>> Let me know if this helps, and please post issues if you find bugs. Of 
>>>> course the memory issue could be while redisplaying in IJulia, in which 
>>>> case this method won't help. 
>>>>
>>>> On Thursday, November 19, 2015, Andrew Keller  
>>>> wrote:
>>>>
>>>>> I'd like to use Interact to have a plot that updates frequently in a 
>>>>> Jupyter notebook, but it seems like there is a large memory leak 
>>>>> somewhere 
>>>>> and I am having some trouble tracking down what package is responsible. 
>>>>> Within a few minutes of running, the following code will cause the memory 
>>>>> used by the web browser to balloon to well over 1 GB with no sign of 
>>>>> slowing down. It is almost like the memory allocated for displaying a 
>>>>> particular plot is never deallocated:
>>>>>
>>>>> using Reactive, Interact, Gadfly
>>>>>
>>>>> @manipulate for 
>>>>>> paused=false,
>>>>>> dt = fpswhen(lift(!, paused), 10)
>>>>>> plot(x=collect(1:10),y=rand(10))
>>>>>> end
>>>>>
>>>>>
>>>>> I can observe this problem using Julia 0.4.1, together with the most 
>>>>> recent releases of all relevant packages, in either Safari on OS X or 
>>>>> Chrome on Windows 10.
>>>>>
>>>>> Here's hoping someone has an idea of what's going on or advice for how 
>>>>> to track down this problem. It seems like something that many others 
>>>>> should 
>>>>> be experiencing.
>>>>>
>>>>> Thanks,
>>>>> Andrew
>>>>>
>>>>
>>
>

Re: [julia-users] Memory leak for animated plots

2015-11-19 Thread Andrew Keller
Maybe generating a new plot every time is not great practice, on account of 
the performance hit. That being said, I think it's perfectly legitimate to 
do what I'm doing for prototyping purposes. I can achieve the frame rate I 
want and the main example shown on https://github.com/JuliaLang/Interact.jl 
does the same thing I do, generating a new plot each time.

In fact, I'd encourage anyone reading this to just try that example, and 
repeatedly click between sin and cos. I'm able to make the memory 
consumption of my browser grow without bound. Surely someone besides myself 
has noticed this before! I don't think loading another package is a serious 
solution to the problem I'm describing, although your package certainly 
looks useful for other purposes.

Just to reiterate, this is not a small memory leak; this is like a memory 
dam breach. I'm happy to help debug this but some assistance would be 
appreciated.


On Thursday, November 19, 2015 at 7:11:55 AM UTC-8, Tom Breloff wrote:
>
> You're creating a new Gadfly.Plot object every update, which is a bad idea 
> even if Gadfly's memory management was perfect. Plots.jl gives you the 
> ability to add to or replace the underlying data like this:
>
> using Plots
> gadfly()
> getxy() = (1:10, rand(10))
> plt = plot(getxy()...)
>
> # overwrite underlying plot data without building a new plot
> plt[1] = getxy()
>
>
> You can also use familiar push! and append! calls. 
>
> Let me know if this helps, and please post issues if you find bugs. Of 
> course the memory issue could be while redisplaying in IJulia, in which 
> case this method won't help. 
>
> On Thursday, November 19, 2015, Andrew Keller  > wrote:
>
>> I'd like to use Interact to have a plot that updates frequently in a 
>> Jupyter notebook, but it seems like there is a large memory leak somewhere 
>> and I am having some trouble tracking down what package is responsible. 
>> Within a few minutes of running, the following code will cause the memory 
>> used by the web browser to balloon to well over 1 GB with no sign of 
>> slowing down. It is almost like the memory allocated for displaying a 
>> particular plot is never deallocated:
>>
>> using Reactive, Interact, Gadfly
>>
>> @manipulate for 
>>> paused=false,
>>> dt = fpswhen(lift(!, paused), 10)
>>> plot(x=collect(1:10),y=rand(10))
>>> end
>>
>>
>> I can observe this problem using Julia 0.4.1, together with the most 
>> recent releases of all relevant packages, in either Safari on OS X or 
>> Chrome on Windows 10.
>>
>> Here's hoping someone has an idea of what's going on or advice for how to 
>> track down this problem. It seems like something that many others should be 
>> experiencing.
>>
>> Thanks,
>> Andrew
>>
>

[julia-users] Memory leak for animated plots

2015-11-19 Thread Andrew Keller
I'd like to use Interact to have a plot that updates frequently in a 
Jupyter notebook, but it seems like there is a large memory leak somewhere 
and I am having some trouble tracking down what package is responsible. 
Within a few minutes of running, the following code will cause the memory 
used by the web browser to balloon to well over 1 GB with no sign of 
slowing down. It is almost like the memory allocated for displaying a 
particular plot is never deallocated:

using Reactive, Interact, Gadfly

@manipulate for paused=false,
    dt = fpswhen(lift(!, paused), 10)
    plot(x=collect(1:10), y=rand(10))
end


I can observe this problem using Julia 0.4.1, together with the most recent 
releases of all relevant packages, in either Safari on OS X or Chrome on 
Windows 10.

Here's hoping someone has an idea of what's going on or advice for how to 
track down this problem. It seems like something that many others should be 
experiencing.

Thanks,
Andrew


Re: [julia-users] macro to exclude code depending on VERSION

2015-11-12 Thread andrew cooke

ah, great.  i won't make a new package then.  thanks.

On Thursday, 12 November 2015 09:30:21 UTC-3, Yichao Yu wrote:
>
> https://github.com/JuliaLang/julia/issues/7449 
> https://github.com/JuliaLang/Compat.jl/pull/131 
> https://github.com/JuliaLang/julia/issues/5892 
>
> On Thu, Nov 12, 2015 at 7:23 AM, andrew cooke  > wrote: 
> > 
> > when you're writing code that uses macros, supporting different versions 
> of 
> > julia seems to be more complex than normal.  in particular, things like: 
> > 
> > if VERSION > XX 
> > # code with macros here 
> > end 
> > 
> > don't work as expected, because macro expansion occurs before runtime 
> > evaluation.  so the macros are expanded whatever the version. 
> > 
> > given that, i have found this simple macro to be useful; 
> > 
> > macro cond(test, block) 
> > if eval(test) 
> > block 
> > end 
> > end 
> > 
> > @cond VERSION >= v"0.4" begin 
> >  # code with macros here 
> > end 
> > 
> > anyway, my questions are: (1) is the above sensible and (2) does this 
> > already exist? 
> > 
> > thanks, 
> > andrew 
> > 
>


[julia-users] macro to exclude code depending on VERSION

2015-11-12 Thread andrew cooke

when you're writing code that uses macros, supporting different versions of 
julia seems to be more complex than normal.  in particular, things like:

if VERSION > XX
# code with macros here
end

don't work as expected, because macro expansion occurs before runtime 
evaluation.  so the macros are expanded whatever the version.

given that, i have found this simple macro to be useful;

macro cond(test, block)
if eval(test)
block
end
end

@cond VERSION >= v"0.4" begin
 # code with macros here
end

anyway, my questions are: (1) is the above sensible and (2) does this 
already exist?

thanks,
andrew
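
One refinement worth noting on the @cond macro above: as written it returns the block without esc(), so macro hygiene will rename any variables the block introduces. A sketch of a hygiene-safe variant (same idea, just escaped):

```julia
# Variant of the @cond macro above with esc(), so identifiers defined
# inside the block belong to the caller rather than being gensym-renamed.
# The test still runs at expansion time, which is the whole point: the
# block is never parsed for macros unless the condition holds.
macro cond(test, block)
    if eval(test)
        esc(block)
    end
end

@cond VERSION >= v"0.4" begin
    # code using version-specific macros here
end
```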



Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-09 Thread andrew cooke

yeah, that's the problem with types and iters.

this is why i had to add read() to StatefulIterators.jl

it seems to me that the problem is related to the lack of a typed generic 
container type.  but i guess it must be more complex than that.

andrew


On Monday, 9 November 2015 17:58:39 UTC-3, Dan wrote:
>
> Hmmm... maybe there is an issue with the following:
>   | | |_| | | | (_| |  |  Version 0.5.0-dev+1137 (2015-11-04 03:36 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Commit 95b7080 (5 days old master)
> |__/   |  x86_64-linux-gnu
>
>
> julia> collect(1:3)
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
>
> julia> collect(rest(1:3,start(1:3)))
> 3-element Array{Any,1}:
>  1
>  2
>  3
>
>
> Shouldn't the type of both arrays be the same? (the latter defined in 
> non-global context still yields an Any Array).
>
> On Monday, November 9, 2015 at 10:45:12 PM UTC+2, Dan wrote:
>>
>> the example with `pull` before, traverses the iterator's beginning 
>> twice... what one probably wants is:
>>
>> julia> function pull(itr,n::Int)
>>state = start(itr)
>>head = eltype(itr)[]
>>while n>0 && !done(itr,state)
>>val,state = next(itr,state)
>>push!(head,val)
>>n-=1
>>end
>>(head,rest(itr,state))
>>end
>> pull (generic function with 2 methods)
>>
>>
>> julia> head,tail = pull([1,2,3,4,5],3)
>> ([1,2,3],Base.Rest{Array{Int64,1},Int64}([1,2,3,4,5],4))
>>
>>
>> julia> collect(tail)
>> 2-element Array{Any,1}:
>>  4
>>  5
>>
>>
>> note the first call already pulls the first 3 elements and collects them 
>> into an array (one can't get to the next elements without first reading the 
>> head).
>>
>> On Monday, November 9, 2015 at 10:39:48 PM UTC+2, andrew cooke wrote:
>>>
>>>
>>> oh that's interesting.  this is from 
>>> https://github.com/JuliaLang/Iterators.jl i guess.
>>>
>>> it doesn't support read though (which i didn't realise i needed when i 
>>> first asked).
>>>
>>> i'll add a warning to StatefulIterators pointing people to this.
>>>
>>> thanks,
>>> andrew
>>>
>>> On Monday, 9 November 2015 17:07:52 UTC-3, Dan wrote:
>>>>
>>>> XXX in your questions = chain.
>>>> Or more clearly:
>>>> julia> stream = chain([1,2,3,4,5])
>>>> Iterators.Chain(Any[[1,2,3,4,5]])
>>>>
>>>> julia> collect(take(stream, 3))
>>>> 3-element Array{Any,1}:
>>>>  1
>>>>  2
>>>>  3
>>>>
>>>>
>>>> On Monday, November 9, 2015 at 7:47:51 PM UTC+2, andrew cooke wrote:
>>>>>
>>>>>
>>>>> hmmm.  maybe i'm doing it wrong as that only gives a factor of 2 
>>>>> speedup.
>>>>>
>>>>> anyway, it's all i need for now, i may return to this later.
>>>>>
>>>>> thanks again,
>>>>> andrew
>>>>>
>>>>> On Monday, 9 November 2015 14:11:55 UTC-3, andrew cooke wrote:
>>>>>>
>>>>>>
>>>>>> yes, i'm about to do it for arrays (i don't care about performance 
>>>>>> right now, but i want to implement read with type conversion and so need 
>>>>>> the types).
>>>>>>
>>>>>> On Monday, 9 November 2015 11:20:47 UTC-3, Yichao Yu wrote:
>>>>>>>
>>>>>>> On Mon, Nov 9, 2015 at 8:04 AM, andrew cooke  
>>>>>>> wrote: 
>>>>>>> > 
>>>>>>> > https://github.com/andrewcooke/StatefulIterators.jl 
>>>>>>>
>>>>>>> FYI, one way to make this more efficient is to parametrize the 
>>>>>>> iterator. You could easily do this for Array's. In the more general 
>>>>>>> case, you need type inference to get the type right for a 
>>>>>>> non-type-stable iterator (iterator with a type unstable index...) 
>>>>>>> but 
>>>>>>> it's generally a bad idea to write code that calls type inference 
>>>>>>> directly. 
>>>>>>>
>>>>>>> > 
>>>>>>> > 
>>>>>>> > On Monday, 9 November 2015 06:24:14 UTC-3, andrew cooke wrote: 
>

Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-09 Thread andrew cooke

oh, ok :o(

On Monday, 9 November 2015 17:36:13 UTC-3, Dan wrote:
>
> ouch... my suggestion takes care of the first output, but the second 
> output repeats the start of the sequence. but `chain` is a useful method to 
> convert an array to an iterable. anyway, I've concocted a method to 
> generate the desired behavior:
>
> julia> function pull(itr,n)
>state = start(itr)
>for i=1:n state = next(itr,state)[2] ; end
>(take(itr,n),rest(itr,state))
>end
> pull (generic function with 1 method)
>
>
> julia> stream = 1:5
> 1:5
>
>
> julia> head, tail = pull(stream,3)
> (Base.Take{UnitRange{Int64}}(1:5,3),Base.Rest{UnitRange{Int64},Int64}(1:5,
> 4))
>
>
> julia> collect(head)
> 3-element Array{Int64,1}:
>  1
>  2
>  3
>
>
> julia> collect(tail)
> 2-element Array{Any,1}:
>  4
>  5
>
> the idea is to use the defined `pull` function to generate the head and 
> tail iterators. this must be so, since the state of the iterators after the 
> first few elements must be remembered somewhere.
>
> On Monday, November 9, 2015 at 10:07:52 PM UTC+2, Dan wrote:
>>
>> XXX in your questions = chain.
>> Or more clearly:
>> julia> stream = chain([1,2,3,4,5])
>> Iterators.Chain(Any[[1,2,3,4,5]])
>>
>> julia> collect(take(stream, 3))
>> 3-element Array{Any,1}:
>>  1
>>  2
>>  3
>>
>>
>> On Monday, November 9, 2015 at 7:47:51 PM UTC+2, andrew cooke wrote:
>>>
>>>
>>> hmmm.  maybe i'm doing it wrong as that only gives a factor of 2 speedup.
>>>
>>> anyway, it's all i need for now, i may return to this later.
>>>
>>> thanks again,
>>> andrew
>>>
>>> On Monday, 9 November 2015 14:11:55 UTC-3, andrew cooke wrote:
>>>>
>>>>
>>>> yes, i'm about to do it for arrays (i don't care about performance 
>>>> right now, but i want to implement read with type conversion and so need 
>>>> the types).
>>>>
>>>> On Monday, 9 November 2015 11:20:47 UTC-3, Yichao Yu wrote:
>>>>>
>>>>> On Mon, Nov 9, 2015 at 8:04 AM, andrew cooke  
>>>>> wrote: 
>>>>> > 
>>>>> > https://github.com/andrewcooke/StatefulIterators.jl 
>>>>>
>>>>> FYI, one way to make this more efficient is to parametrize the 
>>>>> iterator. You could easily do this for Array's. In the more general 
>>>>> case, you need type inference to get the type right for a 
>>>>> non-type-stable iterator (iterator with a type unstable index...) but 
>>>>> it's generally a bad idea to write code that calls type inference 
>>>>> directly. 
>>>>>
>>>>> > 
>>>>> > 
>>>>> > On Monday, 9 November 2015 06:24:14 UTC-3, andrew cooke wrote: 
>>>>> >> 
>>>>> >> thanks! 
>>>>> >> 
>>>>> >> On Sunday, 8 November 2015 22:40:53 UTC-3, Yichao Yu wrote: 
>>>>> >>> 
>>>>> >>> On Sun, Nov 8, 2015 at 8:11 PM, andrew cooke  
>>>>> wrote: 
>>>>> >>> > I'd like to be able to use take() and all the other iterator 
>>>>> tools with 
>>>>> >>> > a 
>>>>> >>> > stream of data backed by an array (or string). 
>>>>> >>> > 
>>>>> >>> > By that I mean I'd like to be able to do something like: 
>>>>> >>> > 
>>>>> >>> >> stream = XXX([1,2,3,4,5]) 
>>>>> >>> >> collect(take(stream, 3)) 
>>>>> >>> > [1,2,3] 
>>>>> >>> >> collect(take(stream, 2)) 
>>>>> >>> > [4,5] 
>>>>> >>> > 
>>>>> >>> > Is this possible?  I can find heavyweight looking streams for 
>>>>> IO, and I 
>>>>> >>> > can 
>>>>> >>> > find lightweight iterables without state.  But I can't seem to 
>>>>> find the 
>>>>> >>> > particular mix described above. 
>>>>> >>> 
>>>>> >>> Jeff's conclusion @ JuliaCon is that it seems impossible to 
>>>>> implement 
>>>>> >>> this (stateful iterator) currently in a generic and performant way 
>>>>> so 
>>>>> >>> I doubt you will find it in a generic iterator library (that works 
>>>>> not 
>>>>> >>> only on arrays). A version that works only on Arrays should be 
>>>>> simple 
>>>>> >>> enough to implement and doesn't sound useful enough to be in an 
>>>>> >>> exported API so I guess you probably should just implement your 
>>>>> own. 
>>>>> >>> 
>>>>> >>> Ref 
>>>>> >>> 
>>>>> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/iterator/julia-users/t4ZieI2_iwI/3NTw1k406qkJ
>>>>>  
>>>>> >>> 
>>>>> >>> > 
>>>>> >>> > (I think I can see how to write it myself; I'm asking if it 
>>>>> already 
>>>>> >>> > exists - 
>>>>> >>> > seems like it should, but I can't find the right words to search 
>>>>> for). 
>>>>> >>> > 
>>>>> >>> > Thanks, 
>>>>> >>> > Andrew 
>>>>> >>> > 
>>>>>
>>>>

Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-09 Thread andrew cooke

oh that's interesting.  this is from 
https://github.com/JuliaLang/Iterators.jl i guess.

it doesn't support read though (which i didn't realise i needed when i 
first asked).

i'll add a warning to StatefulIterators pointing people to this.

thanks,
andrew

On Monday, 9 November 2015 17:07:52 UTC-3, Dan wrote:
>
> XXX in your questions = chain.
> Or more clearly:
> julia> stream = chain([1,2,3,4,5])
> Iterators.Chain(Any[[1,2,3,4,5]])
>
> julia> collect(take(stream, 3))
> 3-element Array{Any,1}:
>  1
>  2
>  3
>
>
> On Monday, November 9, 2015 at 7:47:51 PM UTC+2, andrew cooke wrote:
>>
>>
>> hmmm.  maybe i'm doing it wrong as that only gives a factor of 2 speedup.
>>
>> anyway, it's all i need for now, i may return to this later.
>>
>> thanks again,
>> andrew
>>
>> On Monday, 9 November 2015 14:11:55 UTC-3, andrew cooke wrote:
>>>
>>>
>>> yes, i'm about to do it for arrays (i don't care about performance right 
>>> now, but i want to implement read with type conversion and so need the 
>>> types).
>>>
>>> On Monday, 9 November 2015 11:20:47 UTC-3, Yichao Yu wrote:
>>>>
>>>> On Mon, Nov 9, 2015 at 8:04 AM, andrew cooke  
>>>> wrote: 
>>>> > 
>>>> > https://github.com/andrewcooke/StatefulIterators.jl 
>>>>
>>>> FYI, one way to make this more efficient is to parametrize the 
>>>> iterator. You could easily do this for Array's. In the more general 
>>>> case, you need type inference to get the type right for a 
>>>> non-type-stable iterator (iterator with a type unstable index...) but 
>>>> it's generally a bad idea to write code that calls type inference 
>>>> directly. 
>>>>
>>>> > 
>>>> > 
>>>> > On Monday, 9 November 2015 06:24:14 UTC-3, andrew cooke wrote: 
>>>> >> 
>>>> >> thanks! 
>>>> >> 
>>>> >> On Sunday, 8 November 2015 22:40:53 UTC-3, Yichao Yu wrote: 
>>>> >>> 
>>>> >>> On Sun, Nov 8, 2015 at 8:11 PM, andrew cooke  
>>>> wrote: 
>>>> >>> > I'd like to be able to use take() and all the other iterator 
>>>> tools with 
>>>> >>> > a 
>>>> >>> > stream of data backed by an array (or string). 
>>>> >>> > 
>>>> >>> > By that I mean I'd like to be able to do something like: 
>>>> >>> > 
>>>> >>> >> stream = XXX([1,2,3,4,5]) 
>>>> >>> >> collect(take(stream, 3)) 
>>>> >>> > [1,2,3] 
>>>> >>> >> collect(take(stream, 2)) 
>>>> >>> > [4,5] 
>>>> >>> > 
>>>> >>> > Is this possible?  I can find heavyweight looking streams for IO, 
>>>> and I 
>>>> >>> > can 
>>>> >>> > find lightweight iterables without state.  But I can't seem to 
>>>> find the 
>>>> >>> > particular mix described above. 
>>>> >>> 
>>>> >>> Jeff's conclusion @ JuliaCon is that it seems impossible to 
>>>> implement 
>>>> >>> this (stateful iterator) currently in a generic and performant way 
>>>> so 
>>>> >>> I doubt you will find it in a generic iterator library (that works 
>>>> not 
>>>> >>> only on arrays). A version that works only on Arrays should be 
>>>> simple 
>>>> >>> enough to implement and doesn't sound useful enough to be in an 
>>>> >>> exported API so I guess you probably should just implement your 
>>>> own. 
>>>> >>> 
>>>> >>> Ref 
>>>> >>> 
>>>> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/iterator/julia-users/t4ZieI2_iwI/3NTw1k406qkJ
>>>>  
>>>> >>> 
>>>> >>> > 
>>>> >>> > (I think I can see how to write it myself; I'm asking if it 
>>>> already 
>>>> >>> > exists - 
>>>> >>> > seems like it should, but I can't find the right words to search 
>>>> for). 
>>>> >>> > 
>>>> >>> > Thanks, 
>>>> >>> > Andrew 
>>>> >>> > 
>>>>
>>>

Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-09 Thread andrew cooke

hmmm.  maybe i'm doing it wrong as that only gives a factor of 2 speedup.

anyway, it's all i need for now, i may return to this later.

thanks again,
andrew

On Monday, 9 November 2015 14:11:55 UTC-3, andrew cooke wrote:
>
>
> yes, i'm about to do it for arrays (i don't care about performance right 
> now, but i want to implement read with type conversion and so need the 
> types).
>
> On Monday, 9 November 2015 11:20:47 UTC-3, Yichao Yu wrote:
>>
>> On Mon, Nov 9, 2015 at 8:04 AM, andrew cooke  wrote: 
>> > 
>> > https://github.com/andrewcooke/StatefulIterators.jl 
>>
>> FYI, one way to make this more efficient is to parametrize the 
>> iterator. You could easily do this for Array's. In the more general 
>> case, you need type inference to get the type right for a 
>> non-type-stable iterator (iterator with a type unstable index...) but 
>> it's generally a bad idea to write code that calls type inference 
>> directly. 
>>
>> > 
>> > 
>> > On Monday, 9 November 2015 06:24:14 UTC-3, andrew cooke wrote: 
>> >> 
>> >> thanks! 
>> >> 
>> >> On Sunday, 8 November 2015 22:40:53 UTC-3, Yichao Yu wrote: 
>> >>> 
>> >>> On Sun, Nov 8, 2015 at 8:11 PM, andrew cooke  
>> wrote: 
>> >>> > I'd like to be able to use take() and all the other iterator tools 
>> with 
>> >>> > a 
>> >>> > stream of data backed by an array (or string). 
>> >>> > 
>> >>> > By that I mean I'd like to be able to do something like: 
>> >>> > 
>> >>> >> stream = XXX([1,2,3,4,5]) 
>> >>> >> collect(take(stream, 3)) 
>> >>> > [1,2,3] 
>> >>> >> collect(take(stream, 2)) 
>> >>> > [4,5] 
>> >>> > 
>> >>> > Is this possible?  I can find heavyweight looking streams for IO, 
>> and I 
>> >>> > can 
>> >>> > find lightweight iterables without state.  But I can't seem to find 
>> the 
>> >>> > particular mix described above. 
>> >>> 
>> >>> Jeff's conclusion @ JuliaCon is that it seems impossible to implement 
>> >>> this (stateful iterator) currently in a generic and performant way so 
>> >>> I doubt you will find it in a generic iterator library (that works 
>> not 
>> >>> only on arrays). A version that works only on Arrays should be simple 
>> >>> enough to implement and doesn't sound useful enough to be in an 
>> >>> exported API so I guess you probably should just implement your own. 
>> >>> 
>> >>> Ref 
>> >>> 
>> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/iterator/julia-users/t4ZieI2_iwI/3NTw1k406qkJ
>>  
>> >>> 
>> >>> > 
>> >>> > (I think I can see how to write it myself; I'm asking if it already 
>> >>> > exists - 
>> >>> > seems like it should, but I can't find the right words to search 
>> for). 
>> >>> > 
>> >>> > Thanks, 
>> >>> > Andrew 
>> >>> > 
>>
>

Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-09 Thread andrew cooke

yes, i'm about to do it for arrays (i don't care about performance right 
now, but i want to implement read with type conversion and so need the 
types).

On Monday, 9 November 2015 11:20:47 UTC-3, Yichao Yu wrote:
>
> On Mon, Nov 9, 2015 at 8:04 AM, andrew cooke  > wrote: 
> > 
> > https://github.com/andrewcooke/StatefulIterators.jl 
>
> FYI, one way to make this more efficient is to parametrize the 
> iterator. You could easily do this for Array's. In the more general 
> case, you need type inference to get the type right for a 
> non-type-stable iterator (iterator with a type unstable index...) but 
> it's generally a bad idea to write code that calls type inference 
> directly. 
>
> > 
> > 
> > On Monday, 9 November 2015 06:24:14 UTC-3, andrew cooke wrote: 
> >> 
> >> thanks! 
> >> 
> >> On Sunday, 8 November 2015 22:40:53 UTC-3, Yichao Yu wrote: 
> >>> 
> >>> On Sun, Nov 8, 2015 at 8:11 PM, andrew cooke  
> wrote: 
> >>> > I'd like to be able to use take() and all the other iterator tools 
> with 
> >>> > a 
> >>> > stream of data backed by an array (or string). 
> >>> > 
> >>> > By that I mean I'd like to be able to do something like: 
> >>> > 
> >>> >> stream = XXX([1,2,3,4,5]) 
> >>> >> collect(take(stream, 3)) 
> >>> > [1,2,3] 
> >>> >> collect(take(stream, 2)) 
> >>> > [4,5] 
> >>> > 
> >>> > Is this possible?  I can find heavyweight looking streams for IO, 
> and I 
> >>> > can 
> >>> > find lightweight iterables without state.  But I can't seem to find 
> the 
> >>> > particular mix described above. 
> >>> 
> >>> Jeff's conclusion @ JuliaCon is that it seems impossible to implement 
> >>> this (stateful iterator) currently in a generic and performant way so 
> >>> I doubt you will find it in a generic iterator library (that works not 
> >>> only on arrays). A version that works only on Arrays should be simple 
> >>> enough to implement and doesn't sound useful enough to be in an 
> >>> exported API so I guess you probably should just implement your own. 
> >>> 
> >>> Ref 
> >>> 
> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/iterator/julia-users/t4ZieI2_iwI/3NTw1k406qkJ
>  
> >>> 
> >>> > 
> >>> > (I think I can see how to write it myself; I'm asking if it already 
> >>> > exists - 
> >>> > seems like it should, but I can't find the right words to search 
> for). 
> >>> > 
> >>> > Thanks, 
> >>> > Andrew 
> >>> > 
>


Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-09 Thread andrew cooke

https://github.com/andrewcooke/StatefulIterators.jl

On Monday, 9 November 2015 06:24:14 UTC-3, andrew cooke wrote:
>
> thanks!
>
> On Sunday, 8 November 2015 22:40:53 UTC-3, Yichao Yu wrote:
>>
>> On Sun, Nov 8, 2015 at 8:11 PM, andrew cooke  wrote: 
>> > I'd like to be able to use take() and all the other iterator tools with 
>> a 
>> > stream of data backed by an array (or string). 
>> > 
>> > By that I mean I'd like to be able to do something like: 
>> > 
>> >> stream = XXX([1,2,3,4,5]) 
>> >> collect(take(stream, 3)) 
>> > [1,2,3] 
>> >> collect(take(stream, 2)) 
>> > [4,5] 
>> > 
>> > Is this possible?  I can find heavyweight looking streams for IO, and I 
>> can 
>> > find lightweight iterables without state.  But I can't seem to find the 
>> > particular mix described above. 
>>
>> Jeff's conclusion @ JuliaCon is that it seems impossible to implement 
>> this (stateful iterator) currently in a generic and performant way so 
>> I doubt you will find it in a generic iterator library (that works not 
>> only on arrays). A version that works only on Arrays should be simple 
>> enough to implement and doesn't sound useful enough to be in an 
>> exported API so I guess you probably should just implement your own. 
>>
>> Ref 
>> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/iterator/julia-users/t4ZieI2_iwI/3NTw1k406qkJ
>>  
>>
>> > 
>> > (I think I can see how to write it myself; I'm asking if it already 
>> exists - 
>> > seems like it should, but I can't find the right words to search for). 
>> > 
>> > Thanks, 
>> > Andrew 
>> > 
>>
>

Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-09 Thread andrew cooke
thanks!

On Sunday, 8 November 2015 22:40:53 UTC-3, Yichao Yu wrote:
>
> On Sun, Nov 8, 2015 at 8:11 PM, andrew cooke  > wrote: 
> > I'd like to be able to use take() and all the other iterator tools with 
> a 
> > stream of data backed by an array (or string). 
> > 
> > By that I mean I'd like to be able to do something like: 
> > 
> >> stream = XXX([1,2,3,4,5]) 
> >> collect(take(stream, 3)) 
> > [1,2,3] 
> >> collect(take(stream, 2)) 
> > [4,5] 
> > 
> > Is this possible?  I can find heavyweight looking streams for IO, and I 
> can 
> > find lightweight iterables without state.  But I can't seem to find the 
> > particular mix described above. 
>
> Jeff's conclusion @ JuliaCon is that it seems impossible to implement 
> this (stateful iterator) currently in a generic and performant way so 
> I doubt you will find it in a generic iterator library (that works not 
> only on arrays). A version that works only on Arrays should be simple 
> enough to implement and doesn't sound useful enough to be in an 
> exported API so I guess you probably should just implement your own. 
>
> Ref 
> https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/iterator/julia-users/t4ZieI2_iwI/3NTw1k406qkJ
>  
>
> > 
> > (I think I can see how to write it myself; I'm asking if it already 
> exists - 
> > seems like it should, but I can't find the right words to search for). 
> > 
> > Thanks, 
> > Andrew 
> > 
>


[julia-users] Arrays as streams / consuming data with take et al

2015-11-08 Thread andrew cooke
I'd like to be able to use take() and all the other iterator tools with a 
stream of data backed by an array (or string).

By that I mean I'd like to be able to do something like:

> stream = XXX([1,2,3,4,5])
> collect(take(stream, 3))
[1,2,3]
> collect(take(stream, 2))
[4,5]

Is this possible?  I can find heavyweight looking streams for IO, and I can 
find lightweight iterables without state.  But I can't seem to find the 
particular mix described above.

(I think I can see how to write it myself; I'm asking if it already exists 
- seems like it should, but I can't find the right words to search for).

Thanks,
Andrew
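
For reference, a minimal array-only sketch of the wrapper being asked for (the XXX above), written against the 0.4-era start/done/next iteration protocol. `StatefulIterator` here is a hypothetical name for illustration, not an existing API:

```julia
# Minimal stateful wrapper over an array: take() consumes elements,
# so successive takes continue where the previous one stopped.
type StatefulIterator{T}
    data::Vector{T}
    pos::Int
end
StatefulIterator{T}(data::Vector{T}) = StatefulIterator{T}(data, 1)

# The real state lives in the object; the protocol state is a dummy.
Base.start(s::StatefulIterator) = nothing
Base.done(s::StatefulIterator, _) = s.pos > length(s.data)
function Base.next(s::StatefulIterator, _)
    v = s.data[s.pos]
    s.pos += 1
    (v, nothing)
end
Base.eltype{T}(::Type{StatefulIterator{T}}) = T

stream = StatefulIterator([1,2,3,4,5])
collect(take(stream, 3))   # elements 1,2,3
collect(take(stream, 2))   # elements 4,5
```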



Re: [julia-users] Parameterisation affects copy?!

2015-11-07 Thread andrew cooke
On Saturday, 7 November 2015 11:13:17 UTC-3, Yichao Yu wrote:
>
> On Sat, Nov 7, 2015 at 9:06 AM, andrew cooke  > wrote: 
> > 
> > This is just blowing my mind.  Can anyone explain it? 
> > 
> > To be clear - adding the {N} means that the "copy constructor" no longer 
> > copies.  See how mutating Foo alters both "instanced", and they both 
> have 
> > the same object reference. 
> > 
>
> You only defined inner constructor for Foo and the "copy" method you 
> defined is actually a method to call `Foo{N}` (more specifically 
> `call{N}(::Type{Foo{N}}, x::Foo)`) 
>
> When you call `Foo(f)`, you are calling `Foo` and not `Foo{2}` (i.e. 
> `call(::Type{Foo}, x::Foo{2})`) and it falls back to the no-op type 
> conversion. Use `which` or `@which` to figure out what you are 
> calling. 
>

i'm about to go out, but i'll check that later - sounds reasonable.  
thanks! 
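
Concretely, following that explanation, a sketch of an outer constructor that makes Foo(f) actually copy (0.4 syntax): it recovers N from the argument's type and forwards to the copying inner constructor.

```julia
immutable Foo{N}
    a
    Foo() = new(zeros(UInt8, N))
    Foo(x::Foo) = new(copy(x.a))
end

# Outer constructor: without this, Foo(f) hits the generic no-op
# conversion fallback (call(::Type{Foo}, x::Foo{2})) and returns f
# itself. With it, the call forwards to the copying inner constructor.
Foo{N}(x::Foo{N}) = Foo{N}(x)

f  = Foo{2}()
f2 = Foo(f)
f.a[1] = 7
f2.a[1]   # still 0x00: f2 holds its own copy
```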


[julia-users] Parameterisation affects copy?!

2015-11-07 Thread andrew cooke

This is just blowing my mind.  Can anyone explain it?

To be clear - adding the {N} means that the "copy constructor" no longer 
copies.  See how mutating Foo alters both "instanced", and they both have 
the same object reference.


andrew@laptop:~/.julia/v0.4/GoCL> julia-0.4
   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.1-pre+22 (2015-11-01 00:06 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 669222e (6 days old release-0.4)
|__/   |  x86_64-suse-linux

julia> immutable Foo{N}
 a
 Foo() = new(zeros(UInt8, N)) 
 Foo(x::Foo) = new(copy(x.a)) 
   end

julia> f = Foo{2}()
Foo{2}(UInt8[0x00,0x00])

julia> f2 = Foo(f)
Foo{2}(UInt8[0x00,0x00])

julia> immutable Bar
 a
 Bar() = new(zeros(UInt8, 2)) 
 Bar(x::Bar) = new(copy(x.a)) 
   end

julia> b = Bar()
Bar(UInt8[0x00,0x00])

julia> b2 = Bar(b)
Bar(UInt8[0x00,0x00])

julia> for x in (f, f2, b, b2)
  println(x)
  println(pointer_from_objref(x))
   end
Foo{2}(UInt8[0x00,0x00])
Ptr{Void} @0x7f5f10d6eb20
Foo{2}(UInt8[0x00,0x00])
Ptr{Void} @0x7f5f10d6eb20
Bar(UInt8[0x00,0x00])
Ptr{Void} @0x7f5f1031e530
Bar(UInt8[0x00,0x00])
Ptr{Void} @0x7f5f10e85f80

julia> f.a[1] = 7
7

julia> f2
Foo{2}(UInt8[0x07,0x00])

julia> b.a[1] = 7
7

julia> b2
Bar(UInt8[0x00,0x00])





[julia-users] Re: unpleasant docstring / macro parsing error

2015-11-06 Thread andrew cooke
https://github.com/JuliaLang/julia/issues/13905

On Friday, 6 November 2015 17:04:14 UTC-3, andrew cooke wrote:
>
>
> is this known?  am i doing something dumb?
>
> if not, i'll create an issue.
>
> andrew@laptop:/tmp> cat nasty.jl 
>
> """this is a doc string"""
> function myfunc()
> @doesnotexist begin
> end
> end
>
> andrew@laptop:/tmp> julia-0.4 nasty.jl 
> ERROR: LoadError: invalid doc expression:
>
> function myfunc()
> $(Expr(:line, 4, symbol("/tmp/nasty.jl")))
> @doesnotexist begin 
> end
> end
>  in include at ./boot.jl:261
>  in include_from_node1 at ./loading.jl:304
>  in process_options at ./client.jl:308
>  in _start at ./client.jl:411
> while loading /tmp/nasty.jl, in expression starting on line 7
>
> in short, the missing macro ends up giving a doc expression error.  it's 
> not so bad above, where it's obvious, but it's frustratingly misleading 
> when you've got a larger function.
>
> with
>
>
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.4.1-pre+22 (2015-11-01 00:06 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Commit 669222e (5 days old release-0.4)
> |__/   |  x86_64-suse-linux
>
> andrew
>


[julia-users] unpleasant docstring / macro parsing error

2015-11-06 Thread andrew cooke

is this known?  am i doing something dumb?

if not, i'll create an issue.

andrew@laptop:/tmp> cat nasty.jl 

"""this is a doc string"""
function myfunc()
@doesnotexist begin
end
end

andrew@laptop:/tmp> julia-0.4 nasty.jl 
ERROR: LoadError: invalid doc expression:

function myfunc()
$(Expr(:line, 4, symbol("/tmp/nasty.jl")))
@doesnotexist begin 
end
end
 in include at ./boot.jl:261
 in include_from_node1 at ./loading.jl:304
 in process_options at ./client.jl:308
 in _start at ./client.jl:411
while loading /tmp/nasty.jl, in expression starting on line 7

in short, the missing macro ends up giving a doc expression error.  it's 
not so bad above, where it's obvious, but it's frustratingly misleading 
when you've got a larger function.

with


   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.4.1-pre+22 (2015-11-01 00:06 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 669222e (5 days old release-0.4)
|__/   |  x86_64-suse-linux

andrew


[julia-users] Re: Confused about doc strings and macros

2015-10-31 Thread andrew cooke

hi, thanks.

you're absolutely right - the error is there (i just wrote a long message 
saying it wasn't, and then had to delete it, because it plainly is - i am 
not sure how i missed it).

i'll do what you suggest.

thanks again,
andrew


On Saturday, 31 October 2015 11:19:33 UTC-3, Michael Hatherly wrote:
>
> For user-defined macros you need to make use of Base.@__doc__ to mark the 
> subexpression returned by your macro that should be documented. In this 
> case it’s probably the Expr(:type, ...) that the docstring should be 
> attached to rather than the subsequent method definitions.
>
> So replacing the line 
> https://github.com/andrewcooke/AutoHashEquals.jl/blob/e3e80dfb190a8f8932fcce1cbdc6e4bcf79ea520/src/AutoHashEquals.jl#L81
>  
> with
> Base.@__doc__($(esc(typ))) 
>
> should be enough to allow documenting code generated via @auto_hash_equals. 
>
>
> There should have been an error message that points to the docs for 
> @__doc__, is that not showing up?
>
> It is available in v0.4 AFAIK.
>
> — Mike
>
> On Saturday, 31 October 2015 15:53:39 UTC+2, andrew cooke wrote:
>>
>>
>> I want to use the (relatively?) new docstrings, but have data structures 
>> that look like:
>>
>> """This is MyType."""
>> @auto_hash_equals type MyType
>>attribute::Something
>> end
>>
>> where the macro comes from 
>> https://github.com/andrewcooke/AutoTypeParameters.jl
>>
>> unfortunately, this gives the error: LoadError: invalid doc expression 
>> (it seems that the docstring is being "applied" to the macro call, not the 
>> data type).
>>
>> and it's not clear to me what the best solution is.  should i modify my 
>> macros so that they can take a string, and then move the macro to before 
>> the string?  or is there something i can do that will make docstrings 
>> understand that it should be applied after the macro?
>>
>> thanks,
>> andrew
>>
>>
>>
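
Michael's fix can be illustrated with a toy macro (the macro name here is made up as a minimal stand-in for @auto_hash_equals; 0.4-era syntax):

```julia
macro auto_dummy(typ)
    quote
        # Mark the generated type expression as the docstring target,
        # so a preceding """...""" attaches to the type rather than
        # triggering "invalid doc expression" on the macro call.
        Base.@__doc__($(esc(typ)))
    end
end

"""This is MyType."""
@auto_dummy type MyType
    attribute::Int
end
```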

[julia-users] Re: Confused about doc strings and macros

2015-10-31 Thread andrew cooke

gah, sorry.  linked to the wrong macro package.  i don't think this is 
particularly relevant, but it should have been 
https://github.com/andrewcooke/AutoHashEquals.jl

On Saturday, 31 October 2015 10:53:39 UTC-3, andrew cooke wrote:
>
>
> I want to use the (relatively?) new docstrings, but have data structures 
> that look like:
>
> """This is MyType."""
> @auto_hash_equals type MyType
>attribute::Something
> end
>
> where the macro comes from 
> https://github.com/andrewcooke/AutoTypeParameters.jl
>
> unfortunately, this gives the error: LoadError: invalid doc expression (it 
> seems that the docstring is being "applied" to the macro call, not the data 
> type).
>
> and it's not clear to me what the best solution is.  should i modify my 
> macros so that they can take a string, and then move the macro to before 
> the string?  or is there something i can do that will make docstrings 
> understand that it should be applied after the macro?
>
> thanks,
> andrew
>
>
>

[julia-users] Confused about doc strings and macros

2015-10-31 Thread andrew cooke

I want to use the (relatively?) new docstrings, but have data structures 
that look like:

"""This is MyType."""
@auto_hash_equals type MyType
   attribute::Something
end

where the macro comes from 
https://github.com/andrewcooke/AutoTypeParameters.jl

unfortunately, this gives the error: LoadError: invalid doc expression (it 
seems that the docstring is being "applied" to the macro call, not the data 
type).

and it's not clear to me what the best solution is.  should i modify my 
macros so that they can take a string, and then move the macro to before 
the string?  or is there something i can do that will make docstrings 
understand that it should be applied after the macro?

thanks,
andrew




Re: [julia-users] Re: Hausdorff distance

2015-10-26 Thread Andrew McLean
This is not exactly my area of expertise, but this recently published paper
looks like a good starting point.

[1]
A. A. Taha and A. Hanbury, ‘An Efficient Algorithm for Calculating the
Exact Hausdorff Distance’, *IEEE Transactions on Pattern Analysis and
Machine Intelligence*, vol. 37, no. 11, pp. 2153–2163, Nov. 2015.

http://dx.doi.org/10.1109/TPAMI.2015.2408351


On 26 October 2015 at 14:22, Júlio Hoffimann 
wrote:

> Hi Andrew,
>
> Could you please point to a good paper? This naive implementation I showed
> is working surprisingly well for me though.
>
> -Júlio
>


[julia-users] Re: Hausdorff distance

2015-10-26 Thread Andrew McLean
The naïve algorithm is O(N^2), I think the most efficient algorithms come 
close to O(N). There is quite a lot of literature on efficient algorithms.

- Andrew

On Friday, 23 October 2015 23:28:46 UTC+1, Júlio Hoffimann wrote:

> Hi,
>
> I want to make the Hausdorff distance (
> https://en.wikipedia.org/wiki/Hausdorff_distance) available in Julia, is 
> the Distances.jl package a good fit or I should create a separate package 
> just for this distance between point sets?
>
> I can think of a very simple (naive) implementation:
>
> using Distances
>
> A, B # matrices whose columns represent the points in the point sets
> D = pairwise(Euclidean(), A, B)
> daB = maximum(minimum(D,2))
> dbA = maximum(minimum(D,1))
> result = max(daB, dbA)
>
> -Júlio
>
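
Putting Júlio's sketch together into a self-contained example (the point sets here are made-up random data; `pairwise` and the 0.4-era `minimum(D, dim)` form are as shown in the thread):

```julia
using Distances

# Two point sets: each column is a 3-D point.
A = rand(3, 100)
B = rand(3, 120)

# All pairwise Euclidean distances (a 100x120 matrix).
D = pairwise(Euclidean(), A, B)

# Directed Hausdorff distances: for each point, the distance to the
# nearest point of the other set, then the worst case over the set.
daB = maximum(minimum(D, 2))   # sup over A of inf over B
dbA = maximum(minimum(D, 1))   # sup over B of inf over A

hausdorff = max(daB, dbA)
```

This is the O(N^2) naive algorithm Andrew mentions; the efficient algorithms in the literature avoid computing the full distance matrix.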


Re: [julia-users] DataFrame type specification

2015-10-26 Thread Andrew Gibb
That worked. Thanks. 


Re: [julia-users] Re: Code runs 500 times slower under 0.4.0

2015-10-23 Thread Andrew
I don't think performance is the only thing Julia has going for it over 
Matlab. Julia has multiple dispatch, a sophisticated type system, macros, 
and functions can modify arrays and other mutable objects. I'm unaware of 
any plan for Matlab to add these things--they would be major changes and 
possibly very confusing for long-time users.

On Friday, October 23, 2015 at 7:18:43 AM UTC-4, Kris De Meyer wrote:
>
> On a slightly different note, in 2 or 3 release cycles, Matlab will have 
> caught up on any performance gains Julia may have introduced (by using the 
> same LLVM compiler procedures Julia uses) and then the only thing Julia 
> will have going for it is that it's free. But my cost to my employers is 
> such that if I lose as little as 3 days a year on compatibility issues, 
> they would be better off paying for a Matlab license...
>
> Best.
>
> Kris
>
>
>
>
>
>  
>
> On Thursday, October 22, 2015 at 10:58:30 PM UTC+1, Stefan Karpinski wrote:
>>
>> You can try using @code_warntype to see if there are type instabilities.
>>
>> On Thu, Oct 22, 2015 at 5:50 PM, Gunnar Farnebäck  
>> wrote:
>>
>>> If you don't have deprecation warnings I would suspect some change in 
>>> 0.4 has introduced type instabilities. If you are using typed 
>>> concatenations you could be hit by 
>>> https://github.com/JuliaLang/julia/issues/13254.
>>>
>>>
>>> Den torsdag 22 oktober 2015 kl. 23:03:00 UTC+2 skrev Kris De Meyer:

 Are there any general style guidelines for moving code from 0.3.11 to 
 0.4.0? Running the unit and functionality tests for a module that I 
 developed under 0.3.11 in 0.4, I experience a 500 times slowdown of blocks 
 of code that I time with @time. 

 Can't even imagine where I have to start looking, and find it 
 flabbergasting that perfectly valid julia code under 0.3.11 (not 
 generating 
 a single warning) can show such a performance degradation under 0.4.0.

 Anyone seen anything similar? Is there some fundamental difference in 
 how code is JIT-compiled under 0.4.0?

 Thanks,

 Kris




>>
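
A minimal illustration of the kind of instability @code_warntype can reveal (a hypothetical example, not taken from Kris's code):

```julia
# Type-unstable: `x` starts as Int but becomes Float64 inside the
# loop, so inference can only give it an abstract type.
function unstable(n)
    x = 0
    for i in 1:n
        x += 0.5
    end
    return x
end

# Stable version: initialize with the type the loop will produce.
function stable(n)
    x = 0.0
    for i in 1:n
        x += 0.5
    end
    return x
end

# @code_warntype unstable(10)  # flags the unstable variable
# @code_warntype stable(10)    # all types concrete
```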

[julia-users] DataFrame type specification

2015-10-22 Thread Andrew Gibb
I have a csv with some data. One of the columns is something I'd always 
like to be read as a string, although sometimes the value will just be 
numerals. Is there some way to specify that I want this column to be an 
AbstractString in readtable? Or perhaps some way to convert that column 
from (say) integers to strings once loaded?

Thanks

Andy


[julia-users] Why can I redefine some types and not others?

2015-10-15 Thread Andrew
I've got code with many user defined types. I am aware that you're not 
supposed to be able to redefine types and that there is a workaround where 
you wrap all your code in a module, but for ease of use I'm trying not to 
do that right now. I'm finding that sometimes I can reload my code without 
error, but there is a particular type that can't be redefined. Example:

In file "firm.jl" I have a type

immutable FirmParams
  kappa::Float64
  alpha::Float64
  gamma::Float64
  beta::Float64
end

and I can run

julia> include("firm.jl")

without error provided I haven't modified FirmParams. If I change 
FirmParams it won't work, but I expected that.

In file "aux.jl" I have some types

abstract AggState
abstract IdioState

immutable AggState1 <: AggState
  w::Float64
  Y::Float64
end

immutable IdioState1 <: IdioState
  D::Float64 
end

immutable State{T1<:IdioState, T2<:AggState}
  is::T1
  as::T2
end


and if I run it, I always get

julia> include("aux.jl")
ERROR: LoadError: invalid redefinition of constant State
 in include at ./boot.jl:261
 in include_from_node1 at ./loading.jl:304
while loading /home/andrew/Documents/julia/ABK/aux.jl, in expression 
starting on line 17


The main difference I can see is that State is a parametric type. Is there 
some mechanism in the background that allows you to reload unmodified type 
declarations, but doesn't work for parametric types?








Re: [julia-users] Modules and namespaces

2015-10-11 Thread Andrew Keller
Thank you for the helpful advice. In this particular case, I can indeed 
just do what you suggest and call @eval at the top level in my module in a 
for loop. It would be useful to know explicitly why it is considered poor 
form to define types inside a function; I don't think it is clear from the 
follow-up link, though I found it helpful for other reasons. I'm using the 
function for my own internal purposes and not exporting it.

Elsewhere in my code, I define a function to create other *functions*—not 
types—based on a few parameters. My module has several includes, and in 
each included file it makes sense to create functions based on some 
parameters in a Dict. So in this case, I like using the function to create 
other functions, because I can just call it in each included file. Is this 
considered bad style? If so, what is an alternative that is comparably 
concise?

On Sunday, October 11, 2015 at 3:04:25 PM UTC-7, Isaiah wrote:
>
> You are calling `symbol` on an object, which results in a fully-qualified 
> name when called inside a module:
>
> julia> module Foo
>abstract a
>f() = symbol(a)
>end
>
> julia> Foo.f()
>
> symbol("Foo.a")
>
>
> (or try adding `@show superSymb` inside your function)
>
> Creating a symbol from a type instance here isn't really necessary because 
> you can splice in `$supertype` directly. (see the "Metaprogramming" section 
> of the manual)
>
> Having said that: calling a function to create a type is not 
> recommended/idiomatic. Instead, you could call `@eval` at the top level in 
> your module (possibly in a for loop). There are a handful of examples of 
> this in base, for example in "linalg/triangular.jl".
>
>
>
>
> On Sun, Oct 11, 2015 at 12:01 PM, Andrew Keller wrote:
>
>> I'm using Julia 0.4.0 on Mac OS X 10.10.5. I'd like to put some code into 
>> a module, but I'm having some trouble with namespaces. The following fails 
>> (`UndefVarError: test.a not defined`) when enclosed inside `module test`. 
>> When outside the module, e.g. pasted into the REPL, the code works fine. 
>> Could someone point me to relevant reading material or explain what is 
>> going on?  It seems I can avoid the problem by putting the string "a" in 
>> the dictionary instead of the abstract type, but I want to know why I am 
>> unable to do things as written. Thank you for your patience as I am new to 
>> the language.
>>
>> module test 
>>
>> abstract a 
>>
>> dict = Dict("key" => a) 
>>
>> function createType(typeName::ASCIIString,supertype::DataType)
>>
>> typeSymb = symbol(typeName)
>> superSymb = symbol(supertype)
>> @eval immutable ($typeSymb){T} <: $superSymb
>>
>> num::Float64
>>
>> end
>>
>> end
>>
>> createType("b",dict["key"])
>>
>> end
>>
>
>
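
Applying Isaiah's suggestion to the original code, the `symbol(supertype)` step is dropped and the DataType is spliced in directly (a sketch in the thread's 0.4 syntax):

```julia
module test

abstract a

dict = Dict("key" => a)

function createType(typeName::ASCIIString, supertype::DataType)
    typeSymb = symbol(typeName)
    # Splice the DataType itself; symbol(supertype) inside a module
    # produces a qualified name like symbol("test.a"), which @eval
    # cannot resolve.
    @eval immutable ($typeSymb){T} <: $supertype
        num::Float64
    end
end

createType("b", dict["key"])

end
```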

[julia-users] Modules and namespaces

2015-10-11 Thread Andrew Keller
I'm using Julia 0.4.0 on Mac OS X 10.10.5. I'd like to put some code into a 
module, but I'm having some trouble with namespaces. The following fails 
(`UndefVarError: test.a not defined`) when enclosed inside `module test`. 
When outside the module, e.g. pasted into the REPL, the code works fine. 
Could someone point me to relevant reading material or explain what is 
going on?  It seems I can avoid the problem by putting the string "a" in 
the dictionary instead of the abstract type, but I want to know why I am 
unable to do things as written. Thank you for your patience as I am new to 
the language.

module test 

abstract a 

dict = Dict("key" => a) 

function createType(typeName::ASCIIString,supertype::DataType)

typeSymb = symbol(typeName)
superSymb = symbol(supertype)
@eval immutable ($typeSymb){T} <: $superSymb

num::Float64

end

end

createType("b",dict["key"])

end


[julia-users] Creating custom METADATA

2015-09-22 Thread Andrew Gibb
I'm trying to follow the steps in the 0.4 Manual to create a custom 
METADATA.jl so that I can distribute packages around my organisation. If I 
clone JuliaLang/METADATA.jl.git into a local folder, then call Pkg.init() 
on that folder, I get the following:

INFO: Initializing package repository /Users/andrewg/.julia/v0.4
INFO: Package directory /Users/andrewg/.julia/v0.4 is already initialized.

And no META_BRANCH is created. Can anyone suggest what I'm doing wrong?

Thanks

Andy


[julia-users] Updating a single output line in IJulia

2015-09-14 Thread Andrew Gibb
Hi,

I have some loops I'd like to observe/debug by seeing the parameters 
change. In the REPL I can do
print(" $p1 $p2 \r")
flush(STDOUT)
and have my parameters updating on a line.

This doesn't seem to work in IJulia. When I try it, there is no output. I 
guess this is because this isn't STDOUT, or something like that.

Does anyone know of a nice way to do this?

Thanks

Andy


[julia-users] Re: Pango Font calls - guidance on coding with an eye to adding to Cairo.jl

2015-09-14 Thread andrew cooke

Drawing is partly an experiment in how to make an interface.  Luxor 
provides (and always will provide) more functionality because it is closer 
to Cairo.  I am not sure Drawing will even expose arbitrary coordinate 
transforms (never mind clipping etc).

Drawing is partly for me to make "art".  I've experimented quite a bit with 
graphics over the years (I once wrote a pure Haskell "functional images" 
library based on Conal Eliott's Pan) and in the end I've found that what I 
really want is not something that is mathematically elegant (like 
Compose.jl which is a higher level Cairo wrapper), but something that is 
simple and provides a direct, imperative interface.  The maths stuff I can 
do in an arbitrary programming language - I just want the plotting to get 
out of my way and work.

Some of the best computer-related art is done with Processing, which is 
similarly simple (I think).

So Drawing is an experiment in the kind of simple UI that I would like to 
draw with.

Having said all that, I seem to have been sucked into Pango and text 
formatting the last few days, just so I can carry a similar interface 
across into text.

Andrew

On Monday, 14 September 2015 11:21:16 UTC-3, cormu...@mac.com wrote:
>
> Drawing.jl looks more sophisticated -- I might switch to it myself :)  I 
> suppose the experts use Cairo or Compose directly, so the focus of these 
> type of packages should be on ease of use/simplicity...



[julia-users] Re: Pango Font calls - guidance on coding with an eye to adding to Cairo.jl

2015-09-14 Thread andrew cooke


On Monday, 14 September 2015 09:21:51 UTC-3, Andreas Lobinger wrote:
>
> I think the \040 is an octal SPACE (hex:20, dec: 32) and might be handled 
> differently when in text/labels.
>

yes 040 is octal for 32 which is ASCII space, but if you look at the 
examples then you'll see that some text has spaces.

i suspect it's some issue with lower level font config on my computer tbh.
 

> btw: Are you aware of Luxor.jl (https://github.com/cormullion/Luxor.jl)? 
>

yes. 


[julia-users] Re: Julian way of tracking a "global" module setting

2015-09-13 Thread andrew cooke

although it's not clear that's much better than

if USE_A
using A
else
using B
end

really...

On Sunday, 13 September 2015 21:59:37 UTC-3, andrew cooke wrote:
>
>
> the following works..
>
> first, defining module AB:
>
> module A
> export foo
> foo() = println("a")
> end
>
> module B
> export foo
> foo() = println("b")
> end
>
> module AB
> if Main.USE_A
> using A
> else
> using B
> end
> export foo
> end
>
> that depends on Main.USE_A, where Main is the initial module when things 
> start up,  so then you can do:
>
> USE_A = true
>
> include("ab.jl")
>
> using AB
> foo()
>
> which prints "a".
>
> no idea if this is considered kosher...
>
> andrew
>
>
>
>
> On Sunday, 13 September 2015 21:45:57 UTC-3, andrew cooke wrote:
>>
>>
>> i don't know of a good way to do this.
>>
>> really, parameterised modules would be great.
>>
>> the simplest thing i can think of is if modules are first class and using 
>> etal can take an expression, but the following doesn't run:
>>
>> module A
>> foo() = println("a")
>> end
>>
>> module B
>> foo() = println("b")
>> end
>>
>> module AB
>> m(x) = x ? A : B
>> end
>>
>> using AB
>> using m(true)
>> foo()  # wouldn't it be nice if this printed "a"?
>>
>> andrew
>>
>>
>> On Sunday, 13 September 2015 19:33:16 UTC-3, Seth wrote:
>>>
>>> Hi all,
>>>
>>> I'd like to track a setting throughout my module (that will cause the 
>>> [transparent] dispatch of either single-threaded or parallel versions of 
>>> many different functions). Is there a more Julian way of doing the 
>>> following? This seems inelegant:
>>>
>>> _parallel = false# start off without parallelism - user calls 
>>> parallelize() to set/unset.
>>>
>>> function parallelize(p::Bool=true)
>>> global _parallel = p
>>> end
>>>
>>>
>>> function foo(a::Int) # there will be many functions like this
>>>   if _parallel
>>>  _foo_parallel(a)
>>>   else
>>> _foo_singlethread(a)
>>>   end
>>> end
>>>
>>>
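
One way to keep the runtime toggle from Seth's original sketch while avoiding an untyped global is a const binding to a mutable container (a sketch; the `_foo_*` bodies are placeholders):

```julia
# `const` plus a Ref keeps the flag binding type-stable.
const _parallel = Ref(false)

parallelize(p::Bool=true) = (_parallel[] = p)

_foo_parallel(a::Int) = a       # placeholder implementations
_foo_singlethread(a::Int) = a

foo(a::Int) = _parallel[] ? _foo_parallel(a) : _foo_singlethread(a)
```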

[julia-users] Re: Julian way of tracking a "global" module setting

2015-09-13 Thread andrew cooke

ah, no it wouldn't, sorry.

On Sunday, 13 September 2015 22:18:24 UTC-3, Seth wrote:
>
> This looks interesting but I'd want to give the user the option of turning 
> parallelization on and off at will. Not sure this will do it.
>
> On Sunday, September 13, 2015 at 5:59:37 PM UTC-7, andrew cooke wrote:
>>
>>
>> the following works..
>>
>> first, defining module AB:
>>
>> module A
>> export foo
>> foo() = println("a")
>> end
>>
>> module B
>> export foo
>> foo() = println("b")
>> end
>>
>> module AB
>> if Main.USE_A
>> using A
>> else
>> using B
>> end
>> export foo
>> end
>>
>> that depends on Main.USE_A, where Main is the initial module when things 
>> start up,  so then you can do:
>>
>> USE_A = true
>>
>> include("ab.jl")
>>
>> using AB
>> foo()
>>
>> which prints "a".
>>
>> no idea if this is considered kosher...
>>
>> andrew
>>
>>
>>
>>
>> On Sunday, 13 September 2015 21:45:57 UTC-3, andrew cooke wrote:
>>>
>>>
>>> i don't know of a good way to do this.
>>>
>>> really, parameterised modules would be great.
>>>
>>> the simplest thing i can think of is if modules are first class and 
>>> using etal can take an expression, but the following doesn't run:
>>>
>>> module A
>>> foo() = println("a")
>>> end
>>>
>>> module B
>>> foo() = println("b")
>>> end
>>>
>>> module AB
>>> m(x) = x ? A : B
>>> end
>>>
>>> using AB
>>> using m(true)
>>> foo()  # wouldn't it be nice if this printed "a"?
>>>
>>> andrew
>>>
>>>
>>> On Sunday, 13 September 2015 19:33:16 UTC-3, Seth wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I'd like to track a setting throughout my module (that will cause the 
>>>> [transparent] dispatch of either single-threaded or parallel versions of 
>>>> many different functions). Is there a more Julian way of doing the 
>>>> following? This seems inelegant:
>>>>
>>>> _parallel = false# start off without parallelism - user calls 
>>>> parallelize() to set/unset.
>>>>
>>>> function parallelize(p::Bool=true)
>>>> global _parallel = p
>>>> end
>>>>
>>>>
>>>> function foo(a::Int) # there will be many functions like this
>>>>   if _parallel
>>>>  _foo_parallel(a)
>>>>   else
>>>> _foo_singlethread(a)
>>>>   end
>>>> end
>>>>
>>>>
