[julia-users] Re: Add documentation for overrides of Base functions?

2015-09-20 Thread Sheehan Olver
Hmm, I guess Google Groups doesn't support Markdown.  Here's a possibly 
easier-to-read version:

I tried adding the following

```julia
doc"""
length(foo) -> Integer

returns the length of foo
"""
Base.length(foo::Foo)
```


inside a module, but it doesn't show up in `?length`.

Any suggestions?  And is the `doc""" ... """` syntax documented somewhere?


On Sunday, September 20, 2015 at 4:17:32 PM UTC+10, Sheehan Olver wrote:
>
> I tried adding the following
>
> ```julia
> doc"""
> length(foo) -> Integer
>
> returns the length of foo
> """
> Base.length(foo::Foo)
> ```
>
> inside a module, but it doesn't show up in 
>
> ```julia
> ?length
> ```
>
> Any suggestions?  And is the `doc""" ..."""...` syntax documented 
> somewhere?

[julia-users] Re: Pkg.build("IJulia") failed with rc2

2015-09-20 Thread Martijn Visser
I had the same thing. I believe it would be solved by merging 
https://github.com/amitmurthy/LibExpat.jl/pull/32

-Martijn

On Sunday, September 20, 2015 at 10:15:35 AM UTC+2, Chris Stook wrote:
>
> After updating to rc2, Pkg.build("IJulia") resulted in an infinite cycle 
> of errors.  After watching the errors for a few minutes I hit Ctrl-C a 
> few times to stop.  I then tried to build WinRPM.  This also resulted in a 
> cycle of errors.  Next I quit Julia, deleted the v0.4 directory, restarted 
> Julia and ran Pkg.add("IJulia").  This also resulted in a cycle of errors.
>
> Now Jupyter will not run for Julia 0.3.11 or 0.4.0-rc2.  It does still 
> work with Python 3.
>
> I'm going to go back to rc1 and see if the problem goes away.
>
> -Chris
>
>

[julia-users] Re: What does the `|>` operator do? (possibly a Gtk.jl question)

2015-09-20 Thread Steven Sagaert
Think of it as Unix pipes.  F# uses the exact same notation, and in fact in 
F# the |> notation is now more prevalent than "regular" function 
application notation, because it reads left to right instead of right to left.

You could also think of it as a special case of the monadic (oops! I 
said the m-word) >>= operator in Haskell.
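A minimal, Gtk-free sketch of the chaining in plain Base Julia:

```julia
# `x |> f` simply calls `f(x)`, so a chain reads left to right.
double(x) = 2x
inc(x) = x + 1

3 |> double |> inc       # inc(double(3)) == 7
[1, 2, 3] |> sum |> inv  # inv(sum([1, 2, 3])) == 1/6
```

`|>` is left-associative, so `a |> f |> g` means `g(f(a))` — the same order you read it.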


On Sunday, September 20, 2015 at 11:08:59 AM UTC+2, Daniel Carrera wrote:
>
> Looking at the code examples from Gtk.jl I found this code example:
>
> w = Gtk.@Window() |>
> (f = Gtk.@Box(:h) |>
> (b = Gtk.@Button("1")) |>
> (c = Gtk.@Button("2")) |>
> (f2 = Gtk.@Box(:v) |>
>   Gtk.@Label("3") |>
>   Gtk.@Label("4"))) |>
> showall
>
>
> This is just a compact way to create a Gtk window and put some objects in 
> it. But I had never seen that `|>` operator before, and I can't figure out 
> what it's doing. Is this operator somehow unique to Gtk.jl? I can see 
> that they use it to nest widgets inside containers, but it's not clear to 
> me how it does it.
>
> Cheers,
> Daniel.
>


Re: [julia-users] Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Adam
Thanks for the comments! Based on both your comments, I've made the 
following changes:

   1. I eliminated the dictionary structure (I only did this as a matter of 
   organization). There are no longer any dictionaries, and those "parameters" 
   and "distributions" are all now passed directly as function arguments. 
   2. I eliminated the type declarations from my function arguments (since 
   there are no longer any dictionaries).  
   3. I surrounded the majority of the code with a "run_sim()" function. 
   Now, it looks like this:

```julia
include("dist_o_i.jl")
include("calc_net.jl")
include("update_w.jl")
include("set_up_sim.jl")
include("simulation.jl")

function run_sim( )
  include("set_up.jl")
  v1 = [-3 0.5 0.5]

  term_condition = false
  iter = 0

  while !term_condition

    v2 = [-1.5 0 1.5 0]

    s_array, w_array, b_array = simulation( v1, v2, D, T, I_val, max_b,
      w_max, gamma_val, controls, Prob_o, signals )

    iter = iter + 1

    if iter >= 1
      term_condition = true
    end

  end
end

@time run_sim()
```


Unfortunately, these adjustments have not changed the performance. Running 
twice, I got:

elapsed time: 39.365116756 seconds (20316825420 bytes allocated, 37.92% gc 
time)
elapsed time: 39.856962087 seconds (20239718784 bytes allocated, 38.50% gc 
time)

Spencer: Re: your point about running "simulation.jl" before timing 
it...yes. That's the reason I provided multiple "elapsed time" results, to 
show that I'm running the function a few times and getting the same 
performance. With my latest alterations, I'm running "@time run_sim()" 
multiple times, and seeing (roughly) the same performance each time. 

Any other ideas? Is there something about the matrix or vector 
multiplication that could be slowing it down? Maybe it has to do with the 
arrays in which I'm storing the data? 
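(As a miniature illustration of the dict-vs-arguments change in points 1–2 above — the parameter names here are invented for the example, not the simulation's actual ones:)

```julia
# Untyped dict: every lookup returns `Any`, so hot loops lose type stability.
p_any = Dict{Any,Any}("gamma" => 0.9, "w_max" => 10.0)

# Typed dict: lookups return a known Float64.
p_typed = Dict{String,Float64}("gamma" => 0.9, "w_max" => 10.0)

# Plain arguments are better still: the compiler specializes on Float64.
step_w(w, gamma, w_max) = min(gamma * w, w_max)

step_w(5.0, p_typed["gamma"], p_typed["w_max"])   # == 4.5
```

The hypothetical `step_w` stands in for an inner-loop update; the point is only that its arguments arrive with concrete types.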



On Saturday, September 19, 2015 at 1:18:54 PM UTC-5, Spencer Russell wrote:
>
> Hi Adam,
>
> Welcome to Julia! 
>
> Just to check - are you calling your `simulation(…)` function before you 
> time it? Otherwise your timing will include the time it takes to compile 
> the function, which happens the first time it’s called with a given set of 
> argument types.
>
> Your `param` dict is of type Dict{Any, Any}, which means that the compiler 
> doesn’t know anything about the types of the objects inside it. You’re doing a 
> lot of accesses into the dict in your inner loops, which causes similar 
> issues to accessing global variables. It will probably speed things up if 
> you declare it as `Dict{String, Float64}()` instead (or whatever the type 
> of the parameters is), or give the parameters as arguments to your 
> simulation function (probably best).
>
> -s
>
> On Sep 19, 2015, at 12:44 PM, Adam  
> wrote:
>
> Hi, I'm a novice to Julia but have heard promising things and wanted to 
> see if the language can help with a problem I'm working on. I have some 
> Matlab code with some user-defined functions that runs a simulation in 
> about ~1.4 seconds in Matlab (for a certain set of parameters that 
> characterize a small problem instance). I translated the code into Julia, 
> and to my surprise the Julia code runs 5x to 30x slower than the Matlab 
> code. I'll be running this code on much larger problem instances many, many 
> times (within some other loops), so performance is important here. 
>
> I created a GitHub gist that contains a stripped-down version of the Julia 
> code that gets as close to (as I can find) the culprit of the problem. The 
> gist is here: https://gist.github.com/anonymous/010bcbda091381b0de9e. A 
> quick description: 
>
>- set_up.jl sets up parameters and other items.
>- set_up_sim.jl sets up items particular to the simulation.
>- simulation.jl runs the simulation.
>- calc_net.jl, dist_o_i.jl, and update_w.jl are user-defined functions 
>executed in the simulation. 
>
>
> On my laptop (running in Juno with Julia version 0.3.10), this code yields:
> elapsed time: 43.269609577 seconds (20297989440 bytes allocated, 38.77% gc 
> time)
> elapsed time: 38.500054653 seconds (20291872804 bytes allocated, 40.41% gc 
> time)
> elapsed time: 40.238907235 seconds (20291869252 bytes allocated, 39.44% gc 
> time)
>
> Why is so much memory used, and why is so much time spent in garbage 
> collection?
>
> I'm familiar with 
> http://docs.julialang.org/en/release-0.3/manual/performance-tips/ and 
> have tried to follow these tips to the best of my knowledge. One example of 
> what might be seen as low-hanging fruit: I tried removing the type 
> declarations from my functions, but this actually increased the run-time of 
> the code by a few seconds. Also, other permutations of the column orders 
> pertaining to D, T, and I led to slower performance. 
>
> I'm sure there are several issues at play here-- I'm just using Julia for 
> the first time. Any tips would be greatly appreciated. 
>
>
>

Re: [julia-users] planned array changes

2015-09-20 Thread Viral Shah
It's not clear what to do with sparse matrices and views. The major change 
will be the introduction of sparse vectors. We want to generally make the 
sparse matrix framework flexible enough so that there can be other 
implementations outside of Base.

-viral

On Friday, September 18, 2015 at 8:46:27 PM UTC+5:30, Seth wrote:
>
> Are there similar plans to revamp sparse matrices? (I'd like to start 
> getting informed as early as possible).
>
> On Friday, September 18, 2015 at 2:14:53 AM UTC-7, Tim Holy wrote:
>>
>> On Thursday, September 17, 2015 11:55:56 PM harven wrote: 
>> > I see that there are many changes scheduled for arrays in the next 
>> release. 
>> > Can someone summarize what is planned? 
>>
>> https://github.com/JuliaLang/julia/issues/13157 
>>
>> > I understand that [,] will become non concatenating. What will be the 
>> type 
>> > of expressions such as 
>> > 
>> >   ["julia", [1, 1.0]] 
>> > 
>> > Any,  union{AbstractString, Array{Float64}}? 
>>
>> Presumably an Array{Any,1}, though Array{Union{ASCIIString, 
>> Array{Float64,1}}, 1} is also a possibility. 
>>
>> > 
>> > Will the following code return an error or an array of some type? 
>> > 
>> >   push!(["one", [1, 5.1]], 1.5) 
>>
>> Depends on the above. 
>>
>> > Is there some syntactic sugar planned for Any arrays, in the spirit of 
>> {}? 
>>
>> Almost certainly not. Braces are in short supply, Any[] is easy. There is 
>> little enthusiasm for burning diamonds to heat the house :-). 
>>
>> --Tim 
>>
>>
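For reference, the `Any[]` spelling Tim mentions already works, and it also answers the `push!` question:

```julia
# An explicitly Any-typed vector, regardless of what an untyped [...] infers.
v = Any["one", [1, 5.1]]

push!(v, 1.5)   # fine: the element type is Any
```

Afterwards `v` holds three elements of three different concrete types.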

Re: [julia-users] Re: Add documentation for overrides of Base functions?

2015-09-20 Thread Sheehan Olver
Ah, I’m still on rc1; I’ll update.




> On 20 Sep 2015, at 5:51 pm, Michael Hatherly  
> wrote:
> 
> Any suggestions?
> 
> What version are you on? I think I’ve already fixed this one and it was 
> backported in RC2. It’s working for me in 0.4.0-rc2 and on master.
> 
> And is the doc""" ..."""... syntax documented somewhere?
> 
> I recently added a section that should cover everything that’s available, 
> here: http://docs.julialang.org/en/latest/manual/documentation/#syntax-guide.
> 
> — Mike
> 
> 
> On Sunday, 20 September 2015 08:17:32 UTC+2, Sheehan Olver wrote:
> I tried adding the following
> 
> ```julia
> doc"""
> length(foo) -> Integer
> 
> returns the length of foo
> """
> Base.length(foo::Foo)
> ```
> 
> inside a module, but it doesn't show up in 
> 
> ```julia
> ?length
> ```
> 
> Any suggestions?  And is the `doc""" ..."""...` syntax documented somewhere?



[julia-users] Re: Pkg.build("IJulia") failed with rc2

2015-09-20 Thread Chris Stook
Went back to rc1.  IJulia is OK now.

-Chris

On Sunday, September 20, 2015 at 4:15:35 AM UTC-4, Chris Stook wrote:
>
> After updating to rc2, Pkg.build("IJulia") resulted in an infinite cycle 
> of errors.  After watching the errors for a few minutes I hit Ctrl-C a 
> few times to stop.  I then tried to build WinRPM.  This also resulted in a 
> cycle of errors.  Next I quit Julia, deleted the v0.4 directory, restarted 
> Julia and ran Pkg.add("IJulia").  This also resulted in a cycle of errors.
>
> Now Jupyter will not run for Julia 0.3.11 or 0.4.0-rc2.  It does still 
> work with Python 3.
>
> I'm going to go back to rc1 and see if the problem goes away.
>
> -Chris
>
>

[julia-users] Re: What does the `|>` operator do? (possibly a Gtk.jl question)

2015-09-20 Thread STAR0SS
From the help:

help?> |>
search: |>

..  |>(x, f)

Applies a function to the preceding argument. This allows for easy function 
chaining.

.. doctest::

julia> [1:5;] |> x->x.^2 |> sum |> inv
0.01818181818181818

The implementation is quite simple:

|>(x, f) = f(x)

https://github.com/JuliaLang/julia/blob/master/base/operators.jl#L198
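Gtk.jl then layers its own `|>` methods on top of that generic definition — roughly "insert the child into the container and return the container", which is what makes the nested-widget style work. A hypothetical sketch (not Gtk.jl's actual source):

```julia
# Overload |> for a container type so `parent |> child` adds the
# child and returns the parent, enabling the chained nesting style.
mutable struct Box              # (`type Box` in 0.3/0.4-era syntax)
    children::Vector{Any}
end
Box() = Box(Any[])

Base.:(|>)(parent::Box, child) = (push!(parent.children, child); parent)

b = Box() |> "Label 3" |> "Label 4"   # both labels end up inside `b`
```

(In Gtk.jl the real methods presumably also distinguish widget children from plain functions, so the final `|> showall` in the example still behaves as ordinary function application.)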


[julia-users] Re: Pkg.build("IJulia") failed with rc2

2015-09-20 Thread Martijn Visser
Turns out #32 was not enough: the warnings were about the `Union()` syntax 
that changed to `Union{}`, which was not covered by #32.
So I created this PR, which solved these problems for me:
https://github.com/amitmurthy/LibExpat.jl/pull/37

On Sunday, September 20, 2015 at 10:20:58 AM UTC+2, Martijn Visser wrote:
>
> I had the same thing. I believe it would be solved by merging 
> https://github.com/amitmurthy/LibExpat.jl/pull/32
>
> -Martijn
>
> On Sunday, September 20, 2015 at 10:15:35 AM UTC+2, Chris Stook wrote:
>>
>> After updating to rc2, Pkg.build("IJulia") resulted in an infinite cycle 
>> of errors.  After watching the errors for a few minutes I hit Ctrl-C a 
>> few times to stop.  I then tried to build WinRPM.  This also resulted in a 
>> cycle of errors.  Next I quit Julia, deleted the v0.4 directory, restarted 
>> Julia and ran Pkg.add("IJulia").  This also resulted in a cycle of errors.
>>
>> Now Jupyter will not run for Julia 0.3.11 or 0.4.0-rc2.  It does still 
>> work with Python 3.
>>
>> I'm going to go back to rc1 and see if the problem goes away.
>>
>> -Chris
>>
>>

[julia-users] Add documentation for overrides of Base functions?

2015-09-20 Thread Sheehan Olver
I tried adding the following

```julia
doc"""
length(foo) -> Integer

returns the length of foo
"""
Base.length(foo::Foo)
```

inside a module, but it doesn't show up in 

```julia
?length
```

Any suggestions?  And is the `doc""" ..."""...` syntax documented somewhere?

[julia-users] Re: Interview with the Julia language creators in The Programmer magazine (Chinese)

2015-09-20 Thread Viral Shah
Is it difficult to access JuliaBox in China because of Google 
authentication?

-viral

On Saturday, September 19, 2015 at 7:12:35 PM UTC+5:30, Roger Luo wrote:
>
> Hi Jiahao Chen,
> I'm an undergraduate student at the University of Science and Technology of 
> China, working on Bohm trajectories in the Key Laboratory of Quantum 
> Information of CAS. After using Julia 0.3 to compute Bohmian 
> mechanics, I think it's much better than C++ or Python for science.
>
> I heard from Hao Xu, the drone guy, who was with you in 
> Boston last year, that you are one of the developers. It's pretty odd 
> that I didn't find any Chinese community on julialang.org, since there 
> are other communities, and few people use this language at my university (at 
> least among people I know).
>
> Is there any Chinese user group in China? I just started a student club at 
> USTC (University of Science and Technology of China) called the USTC Julia 
> User Group. If there is a mailing list for Chinese users, or a Chinese 
> community, I hope we can stay in touch. I think it would 
> be more convenient for Chinese users to ask questions in Chinese.
>
> BTW, is there any possibility of starting a JuliaBox service in mainland 
> China? I found that accessing it is really hard from there. If it's 
> possible, I can help establish one at my university with members of our LUG 
> (https://lug.ustc.edu.cn/wiki/), but I think I would need help, because so 
> far I have only used Julia for my lab work.
>
> On Friday, 14 March 2014 at 10:06:49 UTC+8, Jiahao Chen wrote:
>>
>>
>> 摘要:一群科学家对现有计算工具感到不满:他们想要一套开源系统,有C的快速,Ruby的动态,Python的通用,R般在统计分析上得心应手,Perl的处理字符串处理,Matlab的线性代数运算能力……易学又不让真正的黑客感到无聊。
>>
>> Abstract: a group of scientists are dissatisfied with existing 
>> computational tools: they want an open-source system 
>> with the speed of C, the dynamism of Ruby, the usability and widespread 
>> use of Python, the ease of statistical analysis à la R, the ability to 
>> process strings like Perl, the ability to do linear algebra operations like 
>> Matlab... easy to learn, yet not boring to real hackers.
>>
>> http://www.csdn.net/article/2014-03-12/2818732
>>
>

[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi,

I tried Pkg.pin("JuliaParser", v"0.1.2") but now I get the following error 
(multiple times).

Before this, JuliaParser was at version v0.6.3; are you sure we should try 
reverting to v0.1.2?


WARNING: LightTable.jl: `skipws` has no method matching skipws(::TokenStream)
 in scopes at C:\Users\Greg\.julia\v0.3\Jewel\src\parse\scope.jl:148
 in codemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\parse/parse.jl:141
 in filemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\module.jl:93
 in anonymous at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable\misc.jl:5
 in handlecmd at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:65
 in handlenext at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:81
 in server at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:22
 in server at C:\Users\Greg\.julia\v0.3\Jewel\src\Jewel.jl:18
 in include at boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at client.jl:285
 in _start at client.jl:354



Any other suggestions?

--Greg


On Sunday, September 20, 2015 at 6:16:10 PM UTC+10, Michael Hatherly wrote:

> The type cannot be constructed error should be fixed on 0.3 by 
> https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean time 
> you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes the 
> problem on Julia 0.3. (Or a version earlier than v"0.1.2" if needed.)
>
> I came across the cannot resize array with shared data error a while 
> ago with the Atom-based Juno. It was fixed by running Pkg.checkout on all the 
> involved packages. Might be the same for the LightTable-based Juno; worth a 
> try maybe.
>
> — Mike
> On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:
>>
>> I tried to solve the problem by running Julia 0.4.0-rc2 instead of Julia 
>> 0.3.11. I managed to execute a few commands in Juno, but Juno/Julia is stuck 
>> as before. The error message is slightly different though:
>>
>>
>> WARNING: LightTable.jl: cannot resize array with shared data
>>  in push! at array.jl:430
>>  in read_operator at C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:368
>>  in next_token at C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:752
>>  in qualifiedname at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:59
>>  in nexttoken at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:78
>>  in nextscope! at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:116
>>  in scopes at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:149
>>  [inlined code] from C:\Users\Serge\.julia\v0.4\Lazy\src\macros.jl:141
>>  in codemodule at C:\Users\Serge\.julia\v0.4\Jewel\src\parse/parse.jl:8
>>  in getmodule at C:\Users\Serge\.julia\v0.4\Jewel\src\eval.jl:42
>>  in anonymous at C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable\eval.jl:51
>>  in handlecmd at C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:65
>>  in handlenext at C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:81
>>  in server at C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:22
>>  in server at C:\Users\Serge\.julia\v0.4\Jewel\src\Jewel.jl:18
>>  in include at boot.jl:261
>>  in include_from_node1 at loading.jl:304
>>  in process_options at client.jl:308
>>  in _start at client.jl:411
>>
>>
>>
>> On Saturday, 19 September 2015 10:40:49 UTC+1, JKPie wrote:
>>>
>>> I have the same problem. I have spent a couple of hours reinstalling Julia 
>>> and Juno on Windows and Linux with no result. The code works fine when I 
>>> call it from the command line directly. 
>>> Please help, it is freezing my work :/
>>> J
>>>
>>

[julia-users] Re: Couldnt connect to Julia @doc not defined

2015-09-20 Thread Michael Hatherly


Tracked down to this Compat.jl change: 
https://github.com/JuliaLang/Compat.jl/pull/126.

— Mike

On Sunday, 20 September 2015 11:43:36 UTC+2, nisha.j…@west.cmu.edu wrote:

Hello,
>
> I am not able to run from LightTable, but it runs fine from the terminal.
> I already tried Pkg.update() and also updated all outdated plugins.
>  
> Any ideas why??
>
> ERROR: @doc not defined
>  in include at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in include_from_node1 at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in reload_path at loading.jl:152
>  in _require at loading.jl:67
>  in require at loading.jl:54
>  in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
>  in include at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in include_from_node1 at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in reload_path at loading.jl:152
>  in _require at loading.jl:67
>  in require at loading.jl:54
>  in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
>  in include at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in include_from_node1 at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in reload_path at loading.jl:152
>  in _require at loading.jl:67
>  in require at loading.jl:54
>  in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
>  in include at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in include_from_node1 at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in include at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in include_from_node1 at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in reload_path at loading.jl:152
>  in _require at loading.jl:67
>  in require at loading.jl:51
>  in include at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in include_from_node1 at loading.jl:128
>  in process_options at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>  in _start at 
> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
> while loading /Users/Nisha/.julia/v0.3/ColorTypes/src/ColorTypes.jl, in 
> expression starting on line 86
> while loading /Users/Nisha/.julia/v0.3/Colors/src/Colors.jl, in expression 
> starting on line 5
> while loading /Users/Nisha/.julia/v0.3/Compose/src/Compose.jl, in 
> expression starting on line 5
> while loading /Users/Nisha/.julia/v0.3/Jewel/src/profile/profile.jl, in 
> expression starting on line 3
> while loading /Users/Nisha/.julia/v0.3/Jewel/src/Jewel.jl, in expression 
> starting on line 15
> while loading /Users/Nisha/Library/Application 
> Support/LightTable/plugins/Julia/jl/init.jl, in expression starting on line 
> 27
>


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
The biggest problem seems to be the main loop in calc_net.jl:

```julia
for idx = 1:numO

    o = o_vec[idx]

    # conditions in which something occurs
    cond_A_su = (o .< b_hist[:,3]) & (b_hist[:,1] .== 65)
    cond_B_su = (o .> b_hist[:,3]) & (b_hist[:,1] .== 66)

    # conditions in which something occurs
    cond_A_pu = (o .== b_hist[:,3]) & (b_hist[:,1] .== 65)
    cond_B_pu = (o .== b_hist[:,3]) & (b_hist[:,1] .== 66)

    # sum stuff according to conditions
    b_A_su_[idx] = sum(b_hist_col2 .* cond_A_su)
    b_B_su_[idx] = sum(b_hist_col2 .* cond_B_su)

    # sum stuff according to conditions
    b_A_pu_[idx] = sum(b_hist_col2 .* cond_A_pu)
    b_B_pu_[idx] = sum(b_hist_col2 .* cond_B_pu)

end # idx
```


Every single instance of `.*`, `.==` and `.>` creates a new loop in the 
machine code produced by Julia, and each one allocates a new temporary 
array. In Matlab this is a necessary evil because native 
Matlab loops are so slow. But Julia loops are fast, so you should write 
this code the way you would write it in C or Fortran:

```julia
# sum stuff according to conditions
b_A_su_[idx] = 0
b_B_su_[idx] = 0

# sum stuff according to conditions
b_A_pu_[idx] = 0
b_B_pu_[idx] = 0

for j in 1:length(b_hist[:,3])
    # conditions in which something occurs
    cond_A_su = (o < b_hist[j,3]) & (b_hist[j,1] == 65)
    cond_B_su = (o > b_hist[j,3]) & (b_hist[j,1] == 66)

    # conditions in which something occurs
    cond_A_pu = (o == b_hist[j,3]) & (b_hist[j,1] == 65)
    cond_B_pu = (o == b_hist[j,3]) & (b_hist[j,1] == 66)

    # sum stuff according to conditions
    b_A_su_[idx] += b_hist_col2[j] * cond_A_su
    b_B_su_[idx] += b_hist_col2[j] * cond_B_su

    # sum stuff according to conditions
    b_A_pu_[idx] += b_hist_col2[j] * cond_A_pu
    b_B_pu_[idx] += b_hist_col2[j] * cond_B_pu
end
```


On my computer, this change reduces the execution time and memory 
allocation (both) by a factor of 3. So this change alone does not make the 
program fast, but I think it is illustrative of the type of changes that 
are needed. Another place I would look at is:

```julia
# retrieve some history
b_hist = reshape(sim["b"][d, 1:t, i, :], t, 3)
b_hist_col2 = reshape(sim["b"][d, 1:t, i, 2], t, 1)
b_hist_col2_A = b_hist_col2 .* (b_hist[:,1] .== 65)
b_hist_col2_B = b_hist_col2 .* (b_hist[:,1] .== 66)
```


The first two lines require memory allocation and might also have a poor 
memory-access pattern. I can't rewrite them because I'm not sure what the 
program is supposed to do. The last two lines are another pair of implicit 
loops that could be bundled together as I showed above.

In general, I think you can improve the code by switching to explicit 
for-loops (as you would in C or C++) and by reducing memory allocation.

Cheers,
Daniel.



On Saturday, 19 September 2015 19:50:50 UTC+2, Adam wrote:
>
> Hi, I'm a novice to Julia but have heard promising things and wanted to 
> see if the language can help with a problem I'm working on. I have some 
> Matlab code with some user-defined functions that runs a simulation in 
> about ~1.4 seconds in Matlab (for a certain set of parameters that 
> characterize a small problem instance). I translated the code into Julia, 
> and to my surprise the Julia code runs 5x to 30x slower than the Matlab 
> code. I'll be running this code on much larger problem instances many, many 
> times (within some other loops), so performance is important here. 
>
> I created a GitHub gist that contains a stripped-down version of the Julia 
> code that gets as close to (as I can find) the culprit of the problem. The 
> gist is here: https://gist.github.com/anonymous/010bcbda091381b0de9e. A 
> quick description: 
>
>- set_up.jl sets up parameters and other items.
>- set_up_sim.jl sets up items particular to the simulation.
>- simulation.jl runs the simulation.
>- calc_net.jl, dist_o_i.jl, and update_w.jl are user-defined functions 
>executed in the simulation. 
>
>
> On my laptop (running in Juno with Julia version 0.3.10), this code yields:
> elapsed time: 43.269609577 seconds (20297989440 bytes allocated, 38.77% gc 
> time)
> elapsed time: 38.500054653 seconds (20291872804 bytes allocated, 40.41% gc 
> time)
> elapsed time: 40.238907235 seconds (20291869252 bytes allocated, 39.44% gc 
> time)
>
> Why is so much memory used, and why is so much time spent in garbage 
> collection?
>
> I'm familiar with 
> http://docs.julialang.org/en/release-0.3/manual/performance-tips/ and 
> have tried to follow these tips to the best of my knowledge. One example of 
> what might be seen as low-hanging fruit: I tried removing the type 
> declarations from my functions, but this actually increased the run-time of 
> the code by a few seconds. Also, other permutations of the column orders 
> pertaining to D, T, and I led to slower performance. 
>
> I'm sure there are several issues at play here-- I'm just using Julia for 
> the first time. Any tips would 

Re: [julia-users] Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Viral Shah
Could you provide the updated code in a GitHub repo rather than as separate 
files in a gist? It's much easier to just clone it and run with the latest 
changes. The gist still has the Dicts.

-viral

On Sunday, September 20, 2015 at 12:00:15 PM UTC+5:30, Adam wrote:
>
> Thanks for the comments! Based on both your comments, I've made the 
> following changes:
>
>1. I eliminated the dictionary structure (I only did this as a matter 
>of organization). There are no longer any dictionaries, and those 
>"parameters" and "distributions" are all now passed directly as function 
>arguments. 
>2. I eliminated the type declarations from my function arguments 
>(since there are no longer any dictionaries).  
>3. I surrounded the majority of the code with a "run_sim()" function. 
>Now, it looks like this:
>
> include("dist_o_i.jl")
> include("calc_net.jl")
> include("update_w.jl")
> include("set_up_sim.jl")
> include("simulation.jl")
>
> function run_sim( )
>   include("set_up.jl")
>   v1 = [-3 0.5 0.5]
>
>   term_condition = false
>   iter = 0
>
>   while !term_condition
>
> v2 = [-1.5 0 1.5 0]
>
> s_array, w_array, b_array = simulation( v1, v2, D, T, I_val, max_b,
>   w_max, gamma_val, controls, Prob_o, signals )
>
> iter = iter + 1
>
> if iter >= 1
>   term_condition = true
> end
>
>   end
> end
>
> @time run_sim()
>
>
> Unfortunately, these adjustments have not changed the performance. Running 
> twice, I got:
>
> elapsed time: 39.365116756 seconds (20316825420 bytes allocated, 37.92% gc 
> time)
> elapsed time: 39.856962087 seconds (20239718784 bytes allocated, 38.50% gc 
> time)
>
> Spencer: Re: your point about running "simulation.jl" before timing 
> it...yes. That's the reason I provided multiple "elapsed time" results, to 
> show that I'm running the function a few times and getting the same 
> performance. With my latest alterations, I'm running "@time run_sim()" 
> multiple times, and seeing (roughly) the same performance each time. 
>
> Any other ideas? Is there something about the matrix or vector 
> multiplication that could be slowing it down? Maybe it has to do with the 
> arrays in which I'm storing the data? 
>
>
>
> On Saturday, September 19, 2015 at 1:18:54 PM UTC-5, Spencer Russell wrote:
>>
>> Hi Adam,
>>
>> Welcome to Julia! 
>>
>> Just to check - are you calling your `simulation(…)` function before you 
>> time it? Otherwise your timing will include the time it takes to compile 
>> the function, which happens the first time it’s called with a given set of 
>> argument types.
>>
>> Your `param` dict is of type Dict{Any, Any}, which means that the compiler 
>> doesn’t know anything about the types of the objects inside it. You’re doing a 
>> lot of accesses into the dict in your inner loops, which causes similar 
>> issues to accessing global variables. It will probably speed things up if 
>> you declare it as `Dict{String, Float64}()` instead (or whatever the type 
>> of the parameters is), or give the parameters as arguments to your 
>> simulation function (probably best).
>>
>> -s
>>
>> On Sep 19, 2015, at 12:44 PM, Adam  wrote:
>>
>> Hi, I'm a novice to Julia but have heard promising things and wanted to 
>> see if the language can help with a problem I'm working on. I have some 
>> Matlab code with some user-defined functions that runs a simulation in 
>> about ~1.4 seconds in Matlab (for a certain set of parameters that 
>> characterize a small problem instance). I translated the code into Julia, 
>> and to my surprise the Julia code runs 5x to 30x slower than the Matlab 
>> code. I'll be running this code on much larger problem instances many, many 
>> times (within some other loops), so performance is important here. 
>>
>> I created a GitHub gist that contains a stripped-down version of the 
>> Julia code that gets as close to (as I can find) the culprit of the 
>> problem. The gist is here: 
>> https://gist.github.com/anonymous/010bcbda091381b0de9e. A quick 
>> description: 
>>
>>- set_up.jl sets up parameters and other items.
>>- set_up_sim.jl sets up items particular to the simulation.
>>- simulation.jl runs the simulation.
>>- calc_net.jl, dist_o_i.jl, and update_w.jl are user-defined 
>>functions executed in the simulation. 
>>
>>
>> On my laptop (running in Juno with Julia version 0.3.10), this code 
>> yields:
>> elapsed time: 43.269609577 seconds (20297989440 bytes allocated, 38.77% 
>> gc time)
>> elapsed time: 38.500054653 seconds (20291872804 bytes allocated, 40.41% 
>> gc time)
>> elapsed time: 40.238907235 seconds (20291869252 bytes allocated, 39.44% 
>> gc time)
>>
>> Why is so much memory used, and why is so much time spent in garbage 
>> collection?
>>
>> I'm familiar with 
>> http://docs.julialang.org/en/release-0.3/manual/performance-tips/ and 
>> have tried to follow these tips to the best of my knowledge. One example of 
>> what might be seen as low-hanging fruit: I tried removing the type 

[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Michael Hatherly


The type cannot be constructed error should be fixed on 0.3 by 
https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean time 
you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes the 
problem on Julia 0.3. (Or a version earlier than v"0.1.2" if needed.)

I’ve come across the cannot resize array with shared data error a while ago 
with the Atom-based Juno. It was fixed by Pkg.checkouting all the involved 
packages. Might be the same for the LightTable-based Juno, worth a try maybe.

— Mike
On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:
>
> I tried to solve the problem by running Julia 0.4.0-rc2 instead of Julia 
> 0.3.11. I manage to execute a few commands in Juno, but juno/julia is stuck 
> as before. The error message is slightly different though:
>
>
>- 
>
>WARNING: LightTable.jl: cannot resize array with shared data
> in push! at array.jl:430
> in read_operator at 
> C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:368
> in next_token at C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:752
> in qualifiedname at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:59
> in nexttoken at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:78
> in nextscope! at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:116
> in scopes at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:149
> [inlined code] from C:\Users\Serge\.julia\v0.4\Lazy\src\macros.jl:141
> in codemodule at C:\Users\Serge\.julia\v0.4\Jewel\src\parse/parse.jl:8
> in getmodule at C:\Users\Serge\.julia\v0.4\Jewel\src\eval.jl:42
> in anonymous at C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable\eval.jl:51
> in handlecmd at 
> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:65
> in handlenext at 
> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:81
> in server at 
> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:22
> in server at C:\Users\Serge\.julia\v0.4\Jewel\src\Jewel.jl:18
> in include at boot.jl:261
> in include_from_node1 at loading.jl:304
> in process_options at client.jl:308
> in _start at client.jl:411
>
>
>
> On Saturday, 19 September 2015 10:40:49 UTC+1, JKPie wrote:
>>
>> I have the same problem, I have spent a couple of hours reinstalling Julia 
>> and Juno on Windows and Linux with no result. The code works fine when I 
>> call it from the command line directly. 
>> Please help, it is freezing my work :/
>> J
>>
>

[julia-users] Re: Add documentation for overrides of Base functions?

2015-09-20 Thread Michael Hatherly


Any suggestions?

What version are you on? I think I’ve already fixed this one and it was 
backported in RC2. It’s working for me in 0.4.0-rc2 and on master.

And is the doc""" ..."""... syntax documented somewhere?

I recently added a section that should cover everything that’s available, 
here http://docs.julialang.org/en/latest/manual/documentation/#syntax-guide.

— Mike
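
For reference, here is a minimal self-contained module that documents an override of a Base function, written in current syntax (a plain `"""` docstring and `struct`; the `doc"""` macro and `type` in this thread were the 0.3/0.4-era forms):

```julia
module Foos

export Foo

struct Foo
    data::Vector{Int}
end

"""
    length(foo::Foo) -> Integer

Return the length of `foo`'s underlying data.
"""
Base.length(foo::Foo) = length(foo.data)

end # module

using .Foos
length(Foo([1, 2, 3]))  # the docstring above is attached to this method
```

Once the module is loaded, the new method's docstring appears alongside the other `length` documentation in help mode.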

On Sunday, 20 September 2015 08:17:32 UTC+2, Sheehan Olver wrote:
>
> I tried adding the following
>
> ```julia
> doc"""
> length(foo) -> Integer
>
> returns the length of foo
> """
> Base.length(foo::Foo)
> ```
>
> inside a module, but it doesn't show up in 
>
> ```julia
> ?length
> ```
>
> Any suggestions?  And is the `doc""" ..."""...` syntax documented 
> somewhere?
>
>
>
>
>
>

[julia-users] Pkg.build("IJulia") failed with rc2

2015-09-20 Thread Chris Stook
After updating to rc2, Pkg.build("IJulia") resulted in an infinite cycle of 
errors.  After watching the errors for a few minutes I hit ctrl-C a few 
times to stop.  I then tried to build WinRPM.  This also resulted in a 
cycle of errors.  Next I quit julia, deleted the v0.4 directory, restarted 
julia and Pkg.add("IJulia").  This also resulted in a cycle of errors.

Now Jupyter will not run for Julia 0.3.11 or 0.4.0-rc2.  It does still work 
with Python 3.

I'm going to go back to rc1 and see if the problem goes away.

-Chris



[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Michael Hatherly


Before this JuliaParser was at version v0.6.3, are you sure we should try 
reverting to v0.1.2?

See the tagged versions 
https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the next 
latest tagged version. You could probably checkout a specific commit prior 
to the commit that’s causing the breakage instead though.

What version of Jewel.jl and LightTable.jl are you using?

— Mike

On Sunday, 20 September 2015 10:56:22 UTC+2, Greg Plowman wrote:
>
> Hi,
>
> I tried Pkg.pin("JuliaParser", v"0.1.2") but now I get the following 
> error (multiple times).
>
> Before this JuliaParser was at version v0.6.3, are you sure we should try 
> reverting to v0.1.2?
>
>
> WARNING: LightTable.jl: `skipws` has no method matching skipws(::
> TokenStream)
>  in scopes at C:\Users\Greg\.julia\v0.3\Jewel\src\parse\scope.jl:148
>  in codemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\parse/parse.jl:141
>  in filemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\module.jl:93
>  in anonymous at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable\misc.jl:5
>  in handlecmd at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable
> .jl:65
>  in handlenext at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
> LightTable.jl:81
>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.jl
> :22
>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\Jewel.jl:18
>  in include at boot.jl:245
>  in include_from_node1 at loading.jl:128
>  in process_options at client.jl:285
>  in _start at client.jl:354
>
>
>
> Any other suggestions?
>
> --Greg
>
>
> On Sunday, September 20, 2015 at 6:16:10 PM UTC+10, Michael Hatherly wrote:
>
>> The type cannot be constructed error should be fixed on 0.3 by 
>> https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean time 
>> you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes the 
>> problem on Julia 0.3. (Or a version earlier than v"0.1.2" if needed.)
>>
>> I’ve come across the cannot resize array with shared data error a while 
>> ago with the Atom-based Juno. It was fixed by Pkg.checkouting all the 
>> involved packages. Might be the same for the LightTable-based Juno, worth a 
>> try maybe.
>>
>> — Mike
>> On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:
>>>
>>> I tried to solve the problem by running Julia 0.4.0-rc2 instead of Julia 
>>> 0.3.11. I manage to execute a few commands in Juno, but juno/julia is stuck 
>>> as before. The error message is slightly different though:
>>>
>>>
>>>- 
>>>
>>>WARNING: LightTable.jl: cannot resize array with shared data
>>> in push! at array.jl:430
>>> in read_operator at 
>>> C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:368
>>> in next_token at C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:752
>>> in qualifiedname at 
>>> C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:59
>>> in nexttoken at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:78
>>> in nextscope! at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:116
>>> in scopes at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:149
>>> [inlined code] from C:\Users\Serge\.julia\v0.4\Lazy\src\macros.jl:141
>>> in codemodule at C:\Users\Serge\.julia\v0.4\Jewel\src\parse/parse.jl:8
>>> in getmodule at C:\Users\Serge\.julia\v0.4\Jewel\src\eval.jl:42
>>> in anonymous at 
>>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable\eval.jl:51
>>> in handlecmd at 
>>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:65
>>> in handlenext at 
>>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:81
>>> in server at 
>>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:22
>>> in server at C:\Users\Serge\.julia\v0.4\Jewel\src\Jewel.jl:18
>>> in include at boot.jl:261
>>> in include_from_node1 at loading.jl:304
>>> in process_options at client.jl:308
>>> in _start at client.jl:411
>>>
>>>
>>>
>>> On Saturday, 19 September 2015 10:40:49 UTC+1, JKPie wrote:

 I have the same problem, I have spent a couple of hours reinstalling 
 Julia and Juno on Windows and Linux with no result. The code works fine 
 when I call it from the command line directly. 
 Please help, it is freezing my work :/
 J

>>>

[julia-users] What does the `|>` operator do? (possibly a Gtk.jl question)

2015-09-20 Thread Daniel Carrera
Looking at the code examples from Gtk.jl I found this code example:

w = Gtk.@Window() |>
(f = Gtk.@Box(:h) |>
(b = Gtk.@Button("1")) |>
(c = Gtk.@Button("2")) |>
(f2 = Gtk.@Box(:v) |>
  Gtk.@Label("3") |>
  Gtk.@Label("4"))) |>
showall


This is just a compact way to create a Gtk window and put some objects in 
it. But I had never seen that `|>` operator before, and I can't figure out 
what it's doing. Is this operator somehow unique to Gtk.jl? I can see 
that they use it to nest widgets inside containers, but it's not clear to 
me how it does it.

Cheers,
Daniel.
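
For reference: `|>` is not Gtk-specific; it is Base's pipe operator, where `x |> f` simply means `f(x)`. Gtk.jl additionally appears to overload it for containers so that piping a child widget into a parent nests it (presumably via `push!`, though that is an assumption about the package's internals). The basic operator:

```julia
# x |> f is just f(x); chains read left to right like a unix pipe.
@assert (4 |> sqrt) == 2.0
@assert ("hello" |> uppercase |> reverse) == "OLLEH"

# Equivalent without the pipe, reading right to left:
@assert reverse(uppercase("hello")) == "OLLEH"
```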


[julia-users] Re: ANN: A potential new Discourse-based Julia forum

2015-09-20 Thread Eric Forgy
I gave it a spin. I didn't see any disadvantage relative to Google Groups 
and see big upside potential. I would support a move.

I like that I can authenticate with Github.

Beware, for the first post, it limits the number of links to "2". Since I 
was just experimenting, I had 4 links. It was a little frustrating having 
to delete them, but not a big deal and it makes sense to limit the number 
of links on the first post, so I think that is actually a good feature. 
Just warning others to not put more than 2 links :)

On Sunday, September 20, 2015 at 8:16:36 AM UTC+8, Jonathan Malmaud wrote:
>
> Hi all,
> There's been some chatter about maybe switching to a new, more modern 
> forum platform for Julia that could potentially subsume julia-users, 
> julia-dev, julia-stats, julia-gpu, and julia-jobs.   I created 
> http://julia.malmaud.com for us to try one out and see if we like it. 
> Please check it out and leave feedback. All the old posts from julia-users 
> have already been imported to it.
>
> It is using Discourse, the same forum software used for the forums of 
> Rust, BoingBoing, and some other big sites. Benefits over Google Groups 
> include better support for topic tagging, community moderation features, 
> Markdown (and hence syntax highlighting) in messages, inline previews of 
> linked-to Github issues, better mobile support, and more options for 
> controlling when and what you get emailed. The Discourse website does a 
> better job of summarizing the advantages than I could.
>
> To get things started, Mike Innes suggested having a topic on what we 
> plan on working on this coming week. I think that's a great idea.
>
> Just to be clear, this isn't "official" in any sense - it's just to 
> kickstart the discussion. 
>
> -Jon
>
>
>

[julia-users] Couldn't connect to Julia @doc not defined

2015-09-20 Thread nisha . jagtiani
Hello,

I am not able to run from LightTable, but it runs fine from the terminal. 
I already tried Pkg.update() and also updated all outdated plugins.
 
Any ideas why??

ERROR: @doc not defined
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:51
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at loading.jl:128
 in process_options at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in _start at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
while loading /Users/Nisha/.julia/v0.3/ColorTypes/src/ColorTypes.jl, in 
expression starting on line 86
while loading /Users/Nisha/.julia/v0.3/Colors/src/Colors.jl, in expression 
starting on line 5
while loading /Users/Nisha/.julia/v0.3/Compose/src/Compose.jl, in 
expression starting on line 5
while loading /Users/Nisha/.julia/v0.3/Jewel/src/profile/profile.jl, in 
expression starting on line 3
while loading /Users/Nisha/.julia/v0.3/Jewel/src/Jewel.jl, in expression 
starting on line 15
while loading /Users/Nisha/Library/Application 
Support/LightTable/plugins/Julia/jl/init.jl, in expression starting on line 
27


[julia-users] couldn't connect to julia

2015-09-20 Thread nisha . jagtiani
On starting up LightTable, it gives me the following error:

I already tried Pkg.update() and updated outdated plugins. I can run it 
properly from the shell but not from LightTable.
Any ideas why??

ERROR: @doc not defined
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:54
 in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in reload_path at loading.jl:152
 in _require at loading.jl:67
 in require at loading.jl:51
 in include at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in include_from_node1 at loading.jl:128
 in process_options at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
 in _start at 
/Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
while loading /Users/Nisha/.julia/v0.3/ColorTypes/src/ColorTypes.jl, in 
expression starting on line 86
while loading /Users/Nisha/.julia/v0.3/Colors/src/Colors.jl, in expression 
starting on line 5
while loading /Users/Nisha/.julia/v0.3/Compose/src/Compose.jl, in 
expression starting on line 5
while loading /Users/Nisha/.julia/v0.3/Jewel/src/profile/profile.jl, in 
expression starting on line 3
while loading /Users/Nisha/.julia/v0.3/Jewel/src/Jewel.jl, in expression 
starting on line 15
while loading /Users/Nisha/Library/Application 
Support/LightTable/plugins/Julia/jl/init.jl, in expression starting on line 
27



[julia-users] Re: ANN: NullableArrays.jl package

2015-09-20 Thread David Gold
@Valentin

Yes. In general: (i) it is simpler to write performant code for 
`NullableArray` objects than it is for `DataArray` objects; (ii) where 
applicable, passing a `NullableArray` argument into an extant 
`AbstractArray` interface tends to yield better performance than does 
passing in a `DataArray` argument. I've written a blog post about the 
project for the Julia Summer of Code that provides a couple of comparison 
data points. It should be published relatively soon.
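
The "struct-of-array" design mentioned in the announcement can be sketched generically (a simplified toy illustration of the idea, not the package's actual implementation):

```julia
# Values and a null-mask stored as two parallel arrays, instead of one
# array of individually boxed Nullable{T} values.
struct MiniNullableVector{T}
    values::Vector{T}
    isnull::Vector{Bool}
end

Base.length(v::MiniNullableVector) = length(v.values)
Base.getindex(v::MiniNullableVector, i::Int) =
    v.isnull[i] ? nothing : v.values[i]

v = MiniNullableVector([1.0, 2.0, 3.0], [false, true, false])
```

Keeping the values in a plain `Vector{T}` is what allows tight, type-stable inner loops over the non-null entries.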

On Saturday, September 19, 2015 at 4:47:48 PM UTC-7, Valentin Churavy wrote:
>
> Congratulations. I am looking forward to using it. Are the performance 
> characteristics as good as expected (compared to DataArray?) 
>
> On Sunday, 20 September 2015 02:38:08 UTC+9, David Gold wrote:
>>
>> I'm happy to announce that a tagged and registered beta release of 
>> NullableArrays.jl  is 
>> now available for use with the Julia 0.4 release candidates. This is the 
>> latest stage of my work for the Julia Summer of Code '15 program, and I 
>> hope to continue to be involved in its development. I'd be remiss not to 
>> thank Alan Edelman's group at MIT, NumFocus, and the Gordon & Betty Moore 
>> Foundation for their financial support, John Myles White for his mentorship 
>> and guidance, and many others of the Julia community who have helped to 
>> contribute both to the package and to my edification as a programmer over 
>> the summer.
>>
>> NullableArrays.jl provides the NullableArray{T, N} type, which is a 
>> "struct-of-array" alternative to Array{Nullable{T}, N}. Our main concern in 
>> developing this package has been to replace DataArrays.jl in providing 
>> support for the representation of missing values in statistical computing. 
>> However, the NullableArray type should be useful in any implementation that 
>> would otherwise involve an Array of Nullable objects, since its 
>> struct-of-array design allows for a number of optimizations. The 
>> NullableArray type takes advantage of the AbstractArray interface and the 
>> package includes specialized implementations for methods such as map, 
>> reduce, broadcast, and a number of Vector-specific methods.
>>
>> Documentation is currently available in the README and through the Julia 
>> REPL's help mode, with centralized online documentation coming shortly.
>>
>> I welcome you all to try out the package, file bug reports and feature 
>> requests, and, if you are so inclined, submit PRs. Happy coding!
>>
>> David
>>
>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Kristoffer Carlsson
Did you run the code twice to not time the JIT compiler?

For me, my version runs in 0.24 seconds and Daniel's in 0.34.

Anyway, adding this to Daniel's 
version: https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it 
run in 0.13 seconds for me.




On Sunday, September 20, 2015 at 7:28:09 PM UTC+2, Adam wrote:
>
> Thanks for the comments! Daniel and Kristoffer, I ran each of your versions 
> on my machine. Daniel's ran in 2.0 seconds on my laptop (~26% of time in gc); 
> Kristoffer's ran in about 7 seconds (~21% of time in gc). I'm not sure why 
> Kristoffer's took so much longer to run for me than it did for him; perhaps 
> his machine is significantly better. Or maybe it's because I'm running in 
> Juno and not from the command line?
>
> As it stands, I can get the code to perform nearly as well as Matlab with 
> Daniel's code and my latest version: 
> https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6 (note, when 
> running I fixed the omission of "Prob_o" as an argument for "update_w", 
> which is the right thing to do but saw no speed improvement). I think the 
> primary difference is that Daniel defines types to store the parameters 
> etc. (which more closely matches my original code), while my latest version 
> passes all function arguments directly. On my laptop, these run in the 
> neighborhood of 2-3 seconds (vs. 1.4 seconds for the Matlab code). 
>
> Any thoughts on how I can beat the Matlab code? I would prefer to not 
> (yet) get into possibilities like parallelization of the "i" for loop in 
> "simulation.jl", since while I'm sure any parallelization would speed up 
> the code, my impression is that Julia should be able to trump Matlab even 
> before getting into anything like that. 
>
> P.S.- In my latest version of the code (again, here: 
> https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6), in line 44 of 
> run_sim() I have:
> #return s_array, w_array, b_array
> when I uncomment that line and run the code, I receive an error saying 
> s_array doesn't exist. Can someone tell me why? I checked, and the 
> "simulation" function indeed creates s_array as output, so I'm not sure why 
> "run_sim" won't return it. 
>
>
> On Sunday, September 20, 2015 at 10:49:06 AM UTC-5, Daniel Carrera wrote:
>>
>>
>> On 20 September 2015 at 17:39, STAR0SS  wrote:
>>
>>> The biggest problem right now is that Prob_o is global in calc_net. You 
>>> need to pass it as an argument too. It's one of the drawbacks of having 
>>> everything global by default; this kind of mistake is sometimes hard to 
>>> spot.
>>>
>>
>> But... calc_net does not use Prob_o ... ?
>>
>>  
>>
>>>
>>> Otherwise the "# get things in right dimensions for calculation below" at 
>>> line 55 is not necessary anymore.
>>>
>>> In these zeros(numO, 1) calls you don't need to put a 1: zeros(numO) gives 
>>> you a vector of length numO, unlike in Matlab where it gives you a matrix.
>>>
>>>
>>
>> I cleaned up the code and updated Github. No speed difference though.
>>
>> Cheers,
>> Daniel.
>>  
>>
>>
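
The point about `zeros` above can be checked directly: `zeros(n)` returns a one-dimensional vector while `zeros(n, 1)` returns an n×1 matrix, and those are distinct types in Julia (unlike Matlab, where everything is a matrix):

```julia
numO = 3

@assert zeros(numO)    isa Vector{Float64}   # one-dimensional, size (3,)
@assert zeros(numO, 1) isa Matrix{Float64}   # two-dimensional, size (3, 1)
@assert size(zeros(numO))    == (3,)
@assert size(zeros(numO, 1)) == (3, 1)
```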

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Kristoffer Carlsson
That might make a difference because there are a lot of performance 
improvements in 0.4 (most notably the new garbage collector). PyPlot works 
fine for me on 0.4, btw.

On Sunday, September 20, 2015 at 8:29:52 PM UTC+2, Daniel Carrera wrote:
>
> Thanks.
>
> No, I'm not on 0.4 yet. I thought it wasn't stable (and I think PyPlot 
> doesn't work on it yet). I'm on 0.3.11.
>
> On 20 September 2015 at 20:28, Kristoffer Carlsson  > wrote:
>
>> https://github.com/timholy/ProfileView.jl is invaluable for performance 
>> tweaking.
>>
>> Are you on 0.4?
>>
>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat 
>> wrote:
>>>
>>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit : 
>>> > 
>>> > 
>>> > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
>>> > kcarl...@gmail.com> wrote: 
>>> > > Did you run the code twice to not time the JIT compiler? 
>>> > > 
>>> > > For me, my version runs in 0.24 and Daniels in 0.34. 
>>> > > 
>>> > > Anyway, adding this to Daniels version: 
>>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it 
>>> > > run in 0.13 seconds for me. 
>>> > > 
>>> > > 
>>> > 
>>> > Interesting. For me that change only makes a 10-20% improvement. On 
>>> > my laptop the program takes about 1.5s which is similar to Adam's. So 
>>> > I guess we are running on similar hardware and you are probably using 
>>> > a faster desktop. In any case, I added the change and updated the 
>>> > repository: 
>>> > 
>>> > https://github.com/dcarrera/sim 
>>> > 
>>> > Is there a good way to profile Julia code? So I have been profiling 
>>> > by inserting tic() and toc() lines everywhere. On my computer 
>>> > @profile seems to do the same thing as @time, so it's kind of useless 
>>> > if I want to find the hot spots in a program. 
>>> Sure : 
>>> http://julia.readthedocs.org/en/latest/manual/profile/ 
>>>
>>>
>>> Regards 
>>>
>>
>
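
A minimal profiling session looks like this (on current Julia the Profile stdlib must be loaded first; on 0.3 `@profile` was available by default; `work` here is just a stand-in function):

```julia
using Profile

function work()
    s = 0.0
    for i in 1:10^6
        s += sqrt(i)
    end
    return s
end

work()              # run once first so JIT compilation is not profiled
@profile work()     # collect stack samples while the function runs
Profile.print()     # print hot spots organized as a call tree
```

Unlike `@time`, which only reports totals, the sampled call tree shows where inside the program the time is actually spent.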

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Thanks.

No, I'm not on 0.4 yet. I thought it wasn't stable (and I think PyPlot
doesn't work on it yet). I'm on 0.3.11.

On 20 September 2015 at 20:28, Kristoffer Carlsson 
wrote:

> https://github.com/timholy/ProfileView.jl is invaluable for performance
> tweaking.
>
> Are you on 0.4?
>
> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat
> wrote:
>>
>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit :
>> >
>> >
>> > On 20 September 2015 at 19:43, Kristoffer Carlsson <
>> > kcarl...@gmail.com> wrote:
>> > > Did you run the code twice to not time the JIT compiler?
>> > >
>> > > For me, my version runs in 0.24 and Daniels in 0.34.
>> > >
>> > > Anyway, adding this to Daniels version:
>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it
>> > > run in 0.13 seconds for me.
>> > >
>> > >
>> >
>> > Interesting. For me that change only makes a 10-20% improvement. On
>> > my laptop the program takes about 1.5s which is similar to Adam's. So
>> > I guess we are running on similar hardware and you are probably using
>> > a faster desktop. In any case, I added the change and updated the
>> > repository:
>> >
>> > https://github.com/dcarrera/sim
>> >
>> > Is there a good way to profile Julia code? So I have been profiling
>> > by inserting tic() and toc() lines everywhere. On my computer
>> > @profile seems to do the same thing as @time, so it's kind of useless
>> > if I want to find the hot spots in a program.
>> Sure :
>> http://julia.readthedocs.org/en/latest/manual/profile/
>>
>>
>> Regards
>>
>


[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Tony Kelman
What's temporarily broken here is some of the packages that 
Light-Table-based Juno relies on to work. In the meantime you can still use 
command-line REPL Julia, and while it's not the most friendly interface 
your code will still run. Your estimation of the Julia ecosystem's 
robustness is pretty accurate though, if you really want to ensure things 
stay working the best way of doing that right now is keeping all packages 
pinned until you have a chance to thoroughly test the versions that an 
upgrade would give you. We plan on automating some of this testing going 
forward, though in the case of Juno much of the code is being replaced 
right now and the replacements aren't totally ready just yet.


On Sunday, September 20, 2015 at 10:45:01 AM UTC-7, Serge Santos wrote:
>
> Thank you all for your inputs. I tried your suggestions and, 
> unfortunately, they do not work. I tried Atom but, after a good start and 
> some success, it keeps crashing in the middle of a calculation (Windows 10).
>
> To summarize what I tried with Juno and julia 0.3.11:
> - Compat v.0.7.0 (pinned)
> - JuliaParser V0.1.2  (pinned)
> - Jewel v1.0.6.
>
> I get a first error message, which seems to indicate that Julia cannot 
> generate an output.
>
> *symbol could not be found jl_generating_output (-1): The specified 
> procedure could not be found.*
>
> Followed by:
>
> *WARNING: LightTable.jl: `skipws` has no method matching 
> skipws(::TokenStream)*
> * in scopes at C:\Users\Serge\.julia\v0.3\Jewel\src\parse\scope.jl:148*
> * in codemodule at C:\Users\Serge\.julia\v0.3\Jewel\src\parse/parse.jl:141*
> * in getmodule at C:\Users\Serge\.julia\v0.3\Jewel\src\eval.jl:42*
> * in anonymous at 
> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable\eval.jl:51*
> * in handlecmd at 
> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:65*
> * in handlenext at 
> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:81*
> * in server at 
> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:22*
> * in server at C:\Users\Serge\.julia\v0.3\Jewel\src\Jewel.jl:18*
> * in include at boot.jl:245*
> * in include_from_node1 at loading.jl:128*
> * in process_options at client.jl:285*
> * in _start at client.jl:354*
>
> Looking at dependencies with MetadataTools, I simply got:* nothing.* I 
> assume that Jewel does not have any dependencies.
>
> I have a lot of understanding for the effort that goes into making the 
> Julia project work and a success, but I just lost two days of my life 
> trying to make things work and I have an important deadline ahead that I am 
> likely to miss because I relied on a promising tool that, unfortunately, 
> does not seem robust enough at this stage given the amount of development 
> happening. It makes me wonder if I should not wait until Julia becomes more 
> established and robust and switch to other solutions.
>
> On Sunday, 20 September 2015 16:47:05 UTC+1, Dongning Guo wrote:
>>
>> In case you're stuck, this may be a way out:
>> I installed Atom editor and it seems Julia (v0.5??? nightly build)
>> works with it after installing a few packages.  I'm learning to use
>> the new environment ...
>> See https://github.com/JunoLab/atom-julia-client/tree/master/manual
>>
>>
>> On Sunday, September 20, 2015 at 6:59:02 AM UTC-5, Michael Hatherly wrote:
>>>
>>> I can’t see LightTable listed in Pkg.status() output in either PC
>>>
>>> The LightTable module is part of the Jewel package, it seems, 
>>> https://github.com/one-more-minute/Jewel.jl/blob/fb854b0a64047ee642773c0aa824993714ee7f56/src/Jewel.jl#L22,
>>>  
>>> and so won’t show up on Pkg.status() output since it’s not a true 
>>> package by itself. Apologies for the misleading directions there.
>>>
>>> What other packages would Juno depend on?
>>>
>>> You can manually walk through the REQUIRE files to see what Jewel 
>>> depends on, or use MetadataTools to do it:
>>>
>>> julia> using MetadataTools
>>> julia> pkgmeta = get_all_pkg();
>>> julia> graph = make_dep_graph(pkgmeta);
>>> julia> deps = get_pkg_dep_graph("Jewel", graph);
>>> julia> map(println, keys(deps.p_to_i));
>>>
>>> You shouldn’t need to change versions for most, if any, of what’s listed 
>>> though. (Don’t forget to call Pkg.free on each package you pin once 
>>> newer versions of the packages are tagged.) Compat 0.7.1 should be far 
>>> enough back I think.
>>>
>>> — Mike
>>> ​
>>> On Sunday, 20 September 2015 13:29:38 UTC+2, Greg Plowman wrote:

 OK I see that second latest tag is v0.1.2 (17 June 2014). Seems a 
 strange jump.

 But now I understand pinning, I can use a strategy of rolling back 
 Juno-related packages until Juno works again.

 What other packages would Juno depend on?

 To help me in this endeavour, I have access to another PC on which Juno 
 runs (almost) without error.
 Confusingly, Pkg.status() reports JuliaParser v0.6.2 on this second PC
 Jewel is v1.0.6 on both PCs.
 I can't see LightTable listed in 

[julia-users] Re: Interview with the Julia language creators in The Programmer magazine (Chinese)

2015-09-20 Thread Sisyphuss
I tried it in China last Christmas. Anything using a Google service (e.g. 
Google APIs, Google Fonts, Google AJAX) is extremely slow and often 
inaccessible. 



On Sunday, September 20, 2015 at 9:00:57 AM UTC+2, Viral Shah wrote:
>
> Is it difficult to access JuliaBox in China because of Google 
> authentication?
>
> -viral
>
> On Saturday, September 19, 2015 at 7:12:35 PM UTC+5:30, Roger Luo wrote:
>>
>> Hi Jiahao Chen,
>> I'm an undergraduate student at the University of Science and Technology 
>> of China, working on Bohm trajectories in the Key Laboratory of Quantum 
>> Information of CAS. After using Julia 0.3 to calculate Bohmian mechanics, 
>> I think it's much better than C++ or Python for science.
>>
>> I heard from Hao Xu, the drone guy, who said he was with you in 
>> Boston last year, that you are one of the developers. And it's pretty weird 
>> that I didn't find any Chinese community on julialang.org, since 
>> there are other communities. And few people use this language at my 
>> university (at least among people I know)
>>
>> Is there any Chinese user group in China? I just started a student club 
>> at USTC (University of Science and Technology of China) called the USTC Julia 
>> User Group. If there is a mailing list or community for Chinese users, I 
>> hope we can stay in touch. I also think it would be more convenient for 
>> Chinese users to ask questions in Chinese.
>>
>> BTW, is there any possibility of starting a JuliaBox service inside China? 
>> I found that accessing it is really hard from inside China. If it's possible, I 
>> can help establish one at my university with members of the LUG (
>> https://lug.ustc.edu.cn/wiki/), but I think I would need help, since I have 
>> only used Julia to program for my lab questions before.
>>
>> On Friday, 14 March 2014 at 10:06:49 UTC+8, Jiahao Chen wrote:
>>>
>>>
>>> 摘要:一群科学家对现有计算工具感到不满:他们想要一套开源系统,有C的快速,Ruby的动态,Python的通用,R般在统计分析上得心应手,Perl的处理字符串处理,Matlab的线性代数运算能力……易学又不让真正的黑客感到无聊。
>>>
>>> Abstract: a group of scientists are dissatisfied with existing 
>>> computational tools: they want an open-source system 
>>> with the speed of C, the dynamism of Ruby, the general-purpose 
>>> usability of Python, the ease of statistical analysis à la R, the ability to 
>>> process strings like Perl, and the ability to do linear algebra like 
>>> Matlab... easy to learn, yet not boring to real hackers.
>>>
>>> http://www.csdn.net/article/2014-03-12/2818732
>>>
>>

Re: [julia-users] Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Viral Shah
Does Julia 0.4 help reduce the GC time?

-viral



> On 20-Sep-2015, at 11:13 pm, Kristoffer Carlsson  
> wrote:
> 
> Did you run the code twice to not time the JIT compiler?
> 
> For me, my version runs in 0.24 and Daniels in 0.34.
> 
> Anyway, adding this to Daniels version: 
> https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it run in 0.13 
> seconds for me.
> 
> 
> 
> 
> On Sunday, September 20, 2015 at 7:28:09 PM UTC+2, Adam wrote:
> Thanks for the comments! Daniel and Kristoffer, I ran each of your code on my 
> machine. Daniel's ran in 2.0 seconds on my laptop (~26% of time in gc); 
> Kristoffer's ran in about 7 seconds on my laptop (~21% of time in gc). I'm 
> not sure why Kristoffer's took so much longer to run for me than it did for 
> him-- perhaps his machine is significantly better. Or maybe it's because I'm 
> running in Juno and not from the command line?
> 
> As it stands, I can get the code to perform nearly as well as Matlab with 
> Daniel's code and my latest version: 
> https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6 (note, when running I 
> fixed the omission of "Prob_o" as an argument for "update_w", which is the 
> right thing to do but saw no speed improvement). I think the primary 
> difference is that Daniel defines types to store the parameters etc. (which 
> more closely matches my original code), while my latest version passes all 
> function arguments directly. On my laptop, these run in the neighborhood of 
> 2-3 seconds (vs. 1.4 seconds for the Matlab code). 
> 
> Any thoughts on how I can beat the Matlab code? I would prefer to not (yet) 
> get into possibilities like parallelization of the "i" for loop in 
> "simulation.jl", since while I'm sure any parallelization would speed up the 
> code, my impression is that Julia should be able to trump Matlab even before 
> getting into anything like that. 
> 
> P.S.- In my latest version of the code (again, here: 
> https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6), in line 44 of 
> run_sim() I have:
> #return s_array, w_array, b_array
> when I uncomment that line and run the code, I receive an error saying 
> s_array doesn't exist. Can someone tell me why? I checked, and the 
> "simulation" function indeed creates s_array as output, so I'm not sure why 
> "run_sim" won't return it. 
> 
> 
> On Sunday, September 20, 2015 at 10:49:06 AM UTC-5, Daniel Carrera wrote:
> 
> On 20 September 2015 at 17:39, STAR0SS  wrote:
> The biggest problem right now is that Prob_o is global in calc_net. You need 
> to pass it as an argument too. It's one of the drawback of having everything 
> global by default, this kind of mistakes are sometimes hard to spot.
> 
> But... calc_net does not use Prob_o ... ?
> 
>  
> 
> Otherwise the "# get things in right dimensions for calculation below" comment 
> at line 55 is not necessary anymore.
> 
> In these zeros(numO, 1) calls you don't need the 1: zeros(numO) gives you a 
> vector of length numO, unlike in Matlab, where it gives you a matrix.
> 
> 
> 
> I cleaned up the code and updated Github. No speed difference though.
> 
> Cheers,
> Daniel.
>  
> 
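As an aside, the `zeros` distinction mentioned above is easy to check (a quick sketch on current Julia):

```julia
# zeros(n) builds a length-n Vector; zeros(n, 1) builds an n×1 Matrix,
# which is what Matlab's zeros(n, 1) corresponds to.
v = zeros(3)
m = zeros(3, 1)

@assert v isa Vector{Float64} && size(v) == (3,)
@assert m isa Matrix{Float64} && size(m) == (3, 1)
@assert v == vec(m)   # same elements, different shapes
```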



[julia-users] Re: ERROR: curve_fit not defined

2015-09-20 Thread Kristoffer Carlsson
https://github.com/JuliaOpt/Optim.jl/pull/70

https://github.com/pjabardo/CurveFit.jl

On Sunday, September 20, 2015 at 7:46:53 PM UTC+2, testertester wrote:
>
> I am on Ubuntu, and my copy of julia was installed with apt-get install 
> julia.
>
> I was trying the curve fitting tutorial found here (
> http://www.walkingrandomly.com/?p=5181) but I kept getting the error 
> "curve_fit not defined". Yes, I have done "Pkg.add("Optim")" and 
> "Pkg.add("LsqFit")", that doesn't help. Apparently, nobody else on the 
> internet has ever had this problem. I'm surprised.
>
> Also, it doesn't seem like I can find the version number of Julia that I'm 
> using. Following this (
> http://stackoverflow.com/questions/25326890/how-to-find-version-number-of-julia-is-there-a-ver-command)
>  
> just gives me the error "verbose not defined". Is every piece of 
> documentation about Julia from earlier than 6/2015 invalid now? The 
> responses I found here are pretty useless, as curve_fit doesn't even work 
> by itself.
>
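As the links above indicate, `curve_fit` moved out of Optim.jl and into the LsqFit.jl package (load it with `using LsqFit`). The version number, incidentally, is available as the built-in constant `VERSION`. For a dependency-free illustration of the kind of fit `curve_fit` performs, here is a linear least-squares fit sketched with base Julia only (not the LsqFit API):

```julia
# What a curve fit computes, illustrated with an ordinary least-squares
# line fit done with base Julia only (no packages needed).
using LinearAlgebra

x = collect(0.0:0.5:5.0)
y = 2.0 .+ 3.0 .* x          # data on an exact line, so the fit recovers a=2, b=3

A = [ones(length(x)) x]      # design matrix for y ≈ a + b*x
a, b = A \ y                 # least-squares solution

@assert isapprox(a, 2.0; atol = 1e-8)
@assert isapprox(b, 3.0; atol = 1e-8)
```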


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Seth
As an interim step, you can also get text profiling information using 
Profile.print() if the graphics aren't working.
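A minimal text-profiling sketch along these lines (`work` is a hypothetical stand-in for the code being tuned; on current Julia the profiler lives in the `Profile` standard library, while on 0.3/0.4 it shipped in Base):

```julia
using Profile      # stdlib on current Julia; on 0.3/0.4 @profile lived in Base

function work(n)   # hypothetical workload standing in for the simulation
    s = 0.0
    for i in 1:n
        s += sqrt(i)
    end
    return s
end

work(10)                      # warm up so JIT compilation isn't profiled
Profile.clear()
@profile work(10_000_000)
Profile.print()               # text report; no graphics stack required
```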

On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera wrote:
>
> Hmm... ProfileView gives me an error:
>
> ERROR: panzoom not defined
>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>  in include at ./boot.jl:245
>  in include_from_node1 at ./loading.jl:128
> while loading /home/daniel/Projects/optimization/run_sim.jl, in expression 
> starting on line 55
>
> Do I need to update something?
>
> Cheers,
> Daniel.
>
> On 20 September 2015 at 20:28, Kristoffer Carlsson  > wrote:
>
>> https://github.com/timholy/ProfileView.jl is invaluable for performance 
>> tweaking.
>>
>> Are you on 0.4?
>>
>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat 
>> wrote:
>>>
>>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit : 
>>> > 
>>> > 
>>> > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
>>> > kcarl...@gmail.com> wrote: 
>>> > > Did you run the code twice to not time the JIT compiler? 
>>> > > 
>>> > > For me, my version runs in 0.24 and Daniels in 0.34. 
>>> > > 
>>> > > Anyway, adding this to Daniels version: 
>>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it 
>>> > > run in 0.13 seconds for me. 
>>> > > 
>>> > > 
>>> > 
>>> > Interesting. For me that change only makes a 10-20% improvement. On 
>>> > my laptop the program takes about 1.5s which is similar to Adam's. So 
>>> > I guess we are running on similar hardware and you are probably using 
>>> > a faster desktop. In any case, I added the change and updated the 
>>> > repository: 
>>> > 
>>> > https://github.com/dcarrera/sim 
>>> > 
>>> > Is there a good way to profile Julia code? So I have been profiling 
>>> > by inserting tic() and toc() lines everywhere. On my computer 
>>> > @profile seems to do the same thing as @time, so it's kind of useless 
>>> > if I want to find the hot spots in a program. 
>>> Sure : 
>>> http://julia.readthedocs.org/en/latest/manual/profile/ 
>>>
>>>
>>> Regards 
>>>
>>
>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Mauro
> As an interim step, you can also get text profiling information using 
> Profile.print() if the graphics aren't working.

You could also try https://github.com/mauro3/ProfileFile.jl which writes
the profile numbers into a file *.pro.  Similar to the memory and
coverage files.

> On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera wrote:
>>
>> Hmm... ProfileView gives me an error:
>>
>> ERROR: panzoom not defined
>>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>>  in include at ./boot.jl:245
>>  in include_from_node1 at ./loading.jl:128
>> while loading /home/daniel/Projects/optimization/run_sim.jl, in expression 
>> starting on line 55
>>
>> Do I need to update something?
>>
>> Cheers,
>> Daniel.
>>
>> On 20 September 2015 at 20:28, Kristoffer Carlsson > > wrote:
>>
>>> https://github.com/timholy/ProfileView.jl is invaluable for performance 
>>> tweaking.
>>>
>>> Are you on 0.4?
>>>
>>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat 
>>> wrote:

 Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit : 
 > 
 > 
 > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
 > kcarl...@gmail.com> wrote: 
 > > Did you run the code twice to not time the JIT compiler? 
 > > 
 > > For me, my version runs in 0.24 and Daniels in 0.34. 
 > > 
 > > Anyway, adding this to Daniels version: 
 > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it 
 > > run in 0.13 seconds for me. 
 > > 
 > > 
 > 
 > Interesting. For me that change only makes a 10-20% improvement. On 
 > my laptop the program takes about 1.5s which is similar to Adam's. So 
 > I guess we are running on similar hardware and you are probably using 
 > a faster desktop. In any case, I added the change and updated the 
 > repository: 
 > 
 > https://github.com/dcarrera/sim 
 > 
 > Is there a good way to profile Julia code? So I have been profiling 
 > by inserting tic() and toc() lines everywhere. On my computer 
 > @profile seems to do the same thing as @time, so it's kind of useless 
 > if I want to find the hot spots in a program. 
 Sure : 
 http://julia.readthedocs.org/en/latest/manual/profile/ 


 Regards 

>>>
>>



Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using
Profile.print() I found that the hottest parts of the code appeared to be
the if-conditions, such as:

if o < b_hist[j,3]

It occurred to me that this could be due to cache misses, so I rewrote the
code to store the data more compactly:

-  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
+  b_hist_1 = sim.b[d, 1:t, i, 1]
+  b_hist_3 = sim.b[d, 1:t, i, 3]
...
-if o < b_hist[j,3]
+if o < b_hist_3[j]


So, instead of a 3xN array, I store two 1xN arrays with the data I
actually want. I suspect that the biggest improvement is not that there is
1/3 less data, but that the data just gets managed differently. The upshot
is that now the program runs 208 times faster for me than it did initially.
For me, execution time went from 45s to 0.2s.

As always, the code is updated on Github:

https://github.com/dcarrera/sim

Cheers,
Daniel.
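For readers following along on a recent Julia, note that slicing semantics have changed since 0.3: dimensions indexed with a scalar are now dropped automatically, which is why the original code needed the `reshape`. The two access patterns can be compared like this (hypothetical data standing in for `sim.b`):

```julia
# sim_b stands in for the 4-d history array sim.b from the thread.
sim_b = rand(2, 100, 4, 3)
d, t, i = 1, 100, 2

# Before: materialize the whole t×3 slice (on Julia 0.3 the reshape was
# needed to drop the scalar-indexed dimensions; here it is a no-op).
b_hist = reshape(sim_b[d, 1:t, i, :], t, 3)

# After: copy only the column the hot loop actually reads.
b_hist_3 = sim_b[d, 1:t, i, 3]      # dense t-element Vector

@assert b_hist[:, 3] == b_hist_3    # same numbers, smaller working set
```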



On 20 September 2015 at 20:51, Seth  wrote:

> As an interim step, you can also get text profiling information using
> Profile.print() if the graphics aren't working.
>
> On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera wrote:
>>
>> Hmm... ProfileView gives me an error:
>>
>> ERROR: panzoom not defined
>>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>>  in include at ./boot.jl:245
>>  in include_from_node1 at ./loading.jl:128
>> while loading /home/daniel/Projects/optimization/run_sim.jl, in
>> expression starting on line 55
>>
>> Do I need to update something?
>>
>> Cheers,
>> Daniel.
>>
>> On 20 September 2015 at 20:28, Kristoffer Carlsson 
>> wrote:
>>
>>> https://github.com/timholy/ProfileView.jl is invaluable for performance
>>> tweaking.
>>>
>>> Are you on 0.4?
>>>
>>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat
>>> wrote:

 Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit :
 >
 >
 > On 20 September 2015 at 19:43, Kristoffer Carlsson <
 > kcarl...@gmail.com> wrote:
 > > Did you run the code twice to not time the JIT compiler?
 > >
 > > For me, my version runs in 0.24 and Daniels in 0.34.
 > >
 > > Anyway, adding this to Daniels version:
 > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it
 > > run in 0.13 seconds for me.
 > >
 > >
 >
 > Interesting. For me that change only makes a 10-20% improvement. On
 > my laptop the program takes about 1.5s which is similar to Adam's. So
 > I guess we are running on similar hardware and you are probably using
 > a faster desktop. In any case, I added the change and updated the
 > repository:
 >
 > https://github.com/dcarrera/sim
 >
 > Is there a good way to profile Julia code? So I have been profiling
 > by inserting tic() and toc() lines everywhere. On my computer
 > @profile seems to do the same thing as @time, so it's kind of useless
 > if I want to find the hot spots in a program.
 Sure :
 http://julia.readthedocs.org/en/latest/manual/profile/


 Regards

>>>
>>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
I just added the 'sim' variable to the outer scope and a return value. This
incurs a small speed penalty. I also removed the remaining `reshape()` from
calc_net and that produced a small speed improvements. The two changes
roughly cancel out. On my computer I measure a 3% net improvement, which is
probably close to my margin of error.

Cheers,
Daniel.
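The scoping fix Kristoffer describes in the quoted message can be sketched as follows (`run_one_sim` is a hypothetical stand-in for the actual simulation):

```julia
# `sim` created inside the while loop is local to the loop body; collect
# results into a vector created in function scope instead.
function run_one_sim(i)       # hypothetical stand-in for the real simulation
    return (id = i, result = i^2)
end

function main(n)
    sims = Any[]              # lives in function scope, survives the loop
    i = 1
    while i <= n
        sim = run_one_sim(i)  # loop-local
        push!(sims, sim)
        i += 1
    end
    return sims               # visible here; a bare `sim` would not be
end

@assert length(main(3)) == 3
```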

On 20 September 2015 at 23:53, Kristoffer Carlsson 
wrote:

> sim is defined inside the while loop which means that it goes out of scope
> after the while loop end, see:
> http://julia.readthedocs.org/en/latest/manual/variables-and-scoping/
>
> If you want to run a number of sims you could for example create an empty
> vector in the start of main and push the sims into it and then return the
> vector of sims in the end of the function.
>
> On Sunday, September 20, 2015 at 11:50:18 PM UTC+2, Adam wrote:
>>
>> Thanks Daniel! That code ran in about 0.3 seconds on my machine as well.
>> More good progress! This puts Julia about ~5x faster than Matlab here.
>>
>> I tried placing "return sim" at the end of your main() function, but I
>> still got an error saying "sim not defined." Why is that? Can I return
>> output from the simulation?
>>
>> On Sunday, September 20, 2015 at 4:14:08 PM UTC-5, Kristoffer Carlsson
>> wrote:
>>>
>>> Oh, if you are on 0.3 I am not sure if slice exists. If it does, it is
>>> really slow.
>>>
>>> On Sunday, September 20, 2015 at 11:13:21 PM UTC+2, Kristoffer Carlsson
>>> wrote:

 For me, your latest changes made the time go from 0.13 -> 0.11. It is
 strange we have so different performances, but then again 0.3 and 0.4 are
 different beasts.

 Adding some calls to slice and another loop gained some perf for me.
 Can you try:

 https://gist.github.com/KristofferC/8a8ff33cb186183eea8d

 On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:
>
> Just another note:
>
> I suspect that the `reshape()` might be the guilty party. I am just
> guessing here, but I suspect that the reshape() forces a memory copy, 
> while
> a regular slice just creates kind of symlink to the original data.
> Furthermore, I suspect that the memory copy would mean that when you try 
> to
> read from the newly created variable, you have to fetch it from RAM,
> despite the fact that the CPU cache already has a perfectly good copy of
> the same data.
>
> Cheers,
> Daniel.
>
>
> On 20 September 2015 at 21:25, Daniel Carrera 
> wrote:
>
>> Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using
>> Profile.print() I found that the hottest parts of the code appeared to be
>> the if-conditions, such as:
>>
>> if o < b_hist[j,3]
>>
>> It occurred to me that this could be due to cache misses, so I
>> rewrote the code to store the data more compactly:
>>
>> -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
>> +  b_hist_1 = sim.b[d, 1:t, i, 1]
>> +  b_hist_3 = sim.b[d, 1:t, i, 3]
>> ...
>> -if o < b_hist[j,3]
>> +if o < b_hist_3[j]
>>
>>
>> So, instead of an 3xN array, I store two 1xN arrays with the data I
>> actually want. I suspect that the biggest improvement is not that there 
>> is
>> 1/3 less data, but that the data just gets managed differently. The 
>> upshot
>> is that now the program runs 208 times faster for me than it did 
>> initially.
>> For me time execution time went from 45s to 0.2s.
>>
>> As always, the code is updated on Github:
>>
>> https://github.com/dcarrera/sim
>>
>> Cheers,
>> Daniel.
>>
>>
>>
>> On 20 September 2015 at 20:51, Seth  wrote:
>>
>>> As an interim step, you can also get text profiling information
>>> using Profile.print() if the graphics aren't working.
>>>
>>> On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera
>>> wrote:

 Hmm... ProfileView gives me an error:

 ERROR: panzoom not defined
  in view at
 /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
  in view at
 /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
  in include at ./boot.jl:245
  in include_from_node1 at ./loading.jl:128
 while loading /home/daniel/Projects/optimization/run_sim.jl, in
 expression starting on line 55

 Do I need to update something?

 Cheers,
 Daniel.

 On 20 September 2015 at 20:28, Kristoffer Carlsson <
 kcarl...@gmail.com> wrote:

> https://github.com/timholy/ProfileView.jl is invaluable for
> performance tweaking.
>
> Are you on 0.4?
>
> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan
> 

[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi All,

On 2 different PCs where Juno works (almost without error) Pkg.status() 
reports JuliaParser v0.6.2
On PC that has Juno errors, Pkg.status() reports JuliaParser v0.6.3
Rolling back to JuliaParser v0.1.2 creates different errors.
So it seems we need to revert to JuliaParser v0.6.2

I'm not at a PC where I can see if we can pin v0.6.2, in light of the 
following:

>> Before this JuliaParser was at version v0.6.3, are you sure we should try 
>> reverting to v0.1.2?
>
> See the tagged versions at 
> https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the next 
> latest tagged version. You could probably checkout a specific commit prior 
> to the commit that’s causing the breakage instead though.

Also, I don't want to play around with Pkg.ANYTHING on a working 
configuration at the moment :)

-- Greg


On Monday, September 21, 2015 at 4:50:37 AM UTC+10, Tony Kelman wrote:

> What's temporarily broken here is some of the packages that 
> Light-Table-based Juno relies on to work. In the meantime you can still use 
> command-line REPL Julia, and while it's not the most friendly interface 
> your code will still run. Your estimation of the Julia ecosystem's 
> robustness is pretty accurate though, if you really want to ensure things 
> stay working the best way of doing that right now is keeping all packages 
> pinned until you have a chance to thoroughly test the versions that an 
> upgrade would give you. We plan on automating some of this testing going 
> forward, though in the case of Juno much of the code is being replaced 
> right now and the replacements aren't totally ready just yet.
>
>
> On Sunday, September 20, 2015 at 10:45:01 AM UTC-7, Serge Santos wrote:
>>
>> Thank you all for your inputs. I tried your suggestions and, 
>> unfortunately, they do not work. I tried Atom but, after a good start and 
>> some success, it keeps crashing in the middle of a calculation (Windows 10).
>>
>> To summarize what I tried with Juno and julia 0.3.11:
>> - Compat v.0.7.0 (pinned)
>> - JuliaParser V0.1.2  (pinned)
>> - Jewel v1.0.6.
>>
>> I get a first error message, which seems to indicate that Julia cannot 
>> generate an output.
>>
>> *symbol could not be found jl_generating_output (-1): The specified 
>> procedure could not be found.*
>>
>> Followed by:
>>
>> *WARNING: LightTable.jl: `skipws` has no method matching 
>> skipws(::TokenStream)*
>> * in scopes at C:\Users\Serge\.julia\v0.3\Jewel\src\parse\scope.jl:148*
>> * in codemodule at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\parse/parse.jl:141*
>> * in getmodule at C:\Users\Serge\.julia\v0.3\Jewel\src\eval.jl:42*
>> * in anonymous at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable\eval.jl:51*
>> * in handlecmd at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:65*
>> * in handlenext at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:81*
>> * in server at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:22*
>> * in server at C:\Users\Serge\.julia\v0.3\Jewel\src\Jewel.jl:18*
>> * in include at boot.jl:245*
>> * in include_from_node1 at loading.jl:128*
>> * in process_options at client.jl:285*
>> * in _start at client.jl:354*
>>
>> Looking at dependencies with MetadataTools, I simply got: *nothing*. I 
>> assume that Jewel does not have any dependencies.
>>
>> I have a lot of understanding for the effort that goes into making the 
>> Julia project work and a success, but I just lost two days of my life 
>> trying to make things work and I have an important deadline ahead that I am 
>> likely to miss because I relied on a promising tool that, unfortunately, 
>> does not seem robust enough at this stage given the amount of development 
>> happening. It makes me wonder if I should switch to other solutions and 
>> wait until Julia becomes more established and robust.
>>
>> On Sunday, 20 September 2015 16:47:05 UTC+1, Dongning Guo wrote:
>>>
>>> In case you're stuck, this may be a way out:
>>> I installed Atom editor and it seems Julia (v0.5??? nightly build)
>>> works with it after installing a few packages.  I'm learning to use
>>> the new environment ...
>>> See https://github.com/JunoLab/atom-julia-client/tree/master/manual
>>>
>>>
>>> On Sunday, September 20, 2015 at 6:59:02 AM UTC-5, Michael Hatherly 
>>> wrote:

 I can’t see LightTable listed in Pkg.status() output in either PC

 The LightTable module is part of the Jewel package, it seems: 
 https://github.com/one-more-minute/Jewel.jl/blob/fb854b0a64047ee642773c0aa824993714ee7f56/src/Jewel.jl#L22,
  
 and so won’t show up on Pkg.status() output since it’s not a true 
 package by itself. Apologies for the misleading directions there.

 What other packages would Juno depend on?

 You can manually walk through the REQUIRE files to see what Jewel 
 depends on, or use MetadataTools to do it:

 julia> using MetadataTools
 julia> pkgmeta = get_all_pkg();
 julia> graph = 

Re: [julia-users] Re: Julia 0.4 RC ppa?

2015-09-20 Thread Elliot Saba
The stable PPA will be updated for 0.4 final, but not with release
candidates.  Julianightlies will continue to track the master branch, that
is, 0.5-dev.  Creating per-version julia packages is a good idea, but not
one that will happen soon, due to my own time constraints.
-E

On Sun, Sep 20, 2015 at 11:33 AM, Christof Stocker <
stocker.chris...@gmail.com> wrote:

> So bottomline what does this mean for the upcoming 0.4 release. Is the deb
> not going to be updated for stable releases anymore, or are you just
> talking about managing different versions at the same time?
>
>
> On 2015-09-20 20:23, Elliot Saba wrote:
>
> Yep, Tony hit it on the head.  As the maintainer of the Ubuntu PPA, I
> definitely understand the usefulness and ease of having Julia managed by
> the system package manager, but unfortunately the build process is
> relatively difficult to debug/fix; we have to jump through quite a few
> hoops to get our source packages ready for building on the Canonical
> servers, and problems often arise and must wait a few weeks before I can
> fix them.
>
> To give you an idea of the workflow we have setup right now, the first
> step is that a job is run on the buildbots, called
> "package_launchpad", which gets run every time a commit passes the
> automated testing and bundles that commit up into a form ready to be
> submitted to launchpad.  The script that is run by that buildbot job is
> right here.
> The results of running that script are saved on the buildbot website linked
> above; here's a link to the latest run,
> which is the first to succeed in a while, due to some incorrect
> configuration after I rebuilt the VM images a few weeks ago in preparation
> for 0.4 releases.  One of the pieces of preparation that the script
> performs is to embed a debian directory from this repository
> , which gives the rules and
> metadata necessary to build a Debian package for Julia.
>
> As far as providing a `julia0.4` package, that is an interesting idea that
> may be the best way forward, but unfortunately I have many other projects
> that are vying for my attention right now.  If anyone reading this is
> interested in pushing forward on that particular effort, even if you don't
> have much experience working on this kind of stuff, please do not hesitate
> to contact me to get more information on how to start, or just start
> submitting pull requests/forking things.
>
> In the meantime, just like Tony said, I think the best bet is to use the
> generic linux tarballs for now.  In all honesty, there's really only one
> concrete benefit to the PPA binaries (other than the simple purity of
> having things managed by dpkg rather than being downloaded and installed to
> user-directories) and that is automatic updates.
>
> Now that I think about it; there's a possibility that a "dummy" .deb could
> be created that just downloads the latest `.tar.gz`, unpacks it into
> `/usr`, and calls it a day.
> -E
>
> On Sun, Sep 20, 2015 at 8:55 AM, Tony Kelman  wrote:
>
>> Versioning the julia package name in the ppa would be a very good idea.
>> The only reason the PPA is often out of date is that it's entirely
>> maintained by a single person who doesn't always have the time to fix
>> things that break or update things that would usually be handled
>> automatically. As I said it takes more maintenance to keep running than the
>> tarball builds, and since the PPA is Ubuntu-specific we've been encouraging
>> people to use the generic tarballs now since we have more control over
>> dependencies, public visibility to any issues that arise, and the ability
>> for multiple people to fix them. I recognize the utility in having your
>> system package manager handle updates, but it's a fair bit more maintenance
>> work. Downloading and installing a tarball to use the binaries of Julia
>> should be pretty easy, and doesn't need root access either.
>>
>>
>> On Sunday, September 20, 2015 at 8:37:57 AM UTC-7, Glen O wrote:
>>>
>>> Is there a reason why the juliareleases ppa couldn't provide a julia0.4
>>> package, separately from the currently julia package? I've seen similar
>>> things done with packages elsewhere, including within the main ubuntu
>>> repositories. Indeed, given the changes happening to the language, perhaps
>>> it's a good idea to start keeping major versions of julia separate (that
>>> is, make it julia0.3 and julia0.4, with julia being a dependency package
>>> that will pull in the latest stable julia (ie/ it will point to julia0.3
>>> until julia0.4 is properly released, then it will point to julia0.4).
>>>
>>> This also minimises issues for people who might have julia 0.3 currently
>>> installed 

[julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi,

I'm trying to define a custom Array type that can be indexed using 
arbitrary ranges.

e.g. A = MyArray(Int, 3:8) would define a 6-element vector with indexes 
ranging from 3 to 8, rather than the default 1 to 6.

I've made some progress, but am now stuck on how to handle colon indexing.

A[4:6] works by defining appropriate getindex and setindex!

e.g.  setindex!{T,S<:Real}(A::MyArray{T,1}, value, I::AbstractVector{S}) = 
...

but A[:] = 0 seems to get translated to A[1:6] before dispatch on setindex!, 
so I can't hijack the call.

From subarray.jl, the code below suggests I can specialise on the Colon 
type, but this doesn't seem to work for me. Colon appears to be converted 
to UnitRange *before* calling setindex!

sub(A::AbstractArray, I::Union(RangeIndex, Colon)...) = sub(A, ntuple(length
(I), i-> isa(I[i], Colon) ? (1:size(A,i)) : I[i])...)


Is there a way around this?
Should I be able to specialise on the colon argument?

-- Greg
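On current Julia this works the way Greg hopes: `A[:]` lowers to `getindex(A, :)` and `A[:] = v` to `setindex!(A, v, :)`, so methods can dispatch on `::Colon` directly (the early conversion to a range was 0.3-era lowering). A minimal sketch with hypothetical names, keeping `MyArray` as a plain wrapper rather than an `AbstractArray` subtype:

```julia
# A vector indexed by an arbitrary range, e.g. MyArray(Int, 3:8)
# holds six elements addressed by indexes 3 through 8.
struct MyArray{T}
    data::Vector{T}
    offset::Int                    # first valid index
end

MyArray(::Type{T}, r::UnitRange{Int}) where {T} =
    MyArray{T}(zeros(T, length(r)), first(r))

Base.getindex(A::MyArray, i::Integer)     = A.data[i - A.offset + 1]
Base.setindex!(A::MyArray, v, i::Integer) = (A.data[i - A.offset + 1] = v)

# Dispatching on ::Colon intercepts A[:] before any range conversion:
Base.getindex(A::MyArray, ::Colon)     = copy(A.data)
Base.setindex!(A::MyArray, v, ::Colon) = (fill!(A.data, v); A)

A = MyArray(Int, 3:8)   # six elements, indexes 3 to 8
A[3] = 10
A[:] = 0                # hits the ::Colon method above
@assert A[:] == zeros(Int, 6)
```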


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Adding your loop increased performance by another 20%. Good stuff!
Changes uploaded to Github.

Cheers,
Daniel.

On 20 September 2015 at 23:13, Kristoffer Carlsson 
wrote:

> For me, your latest changes made the time go from 0.13 -> 0.11. It is
> strange we have so different performances, but then again 0.3 and 0.4 are
> different beasts.
>
> Adding some calls to slice and another loop gained some perf for me. Can
> you try:
>
> https://gist.github.com/KristofferC/8a8ff33cb186183eea8d
>
> On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:
>>
>> Just another note:
>>
>> I suspect that the `reshape()` might be the guilty party. I am just
>> guessing here, but I suspect that the reshape() forces a memory copy, while
>> a regular slice just creates kind of symlink to the original data.
>> Furthermore, I suspect that the memory copy would mean that when you try to
>> read from the newly created variable, you have to fetch it from RAM,
>> despite the fact that the CPU cache already has a perfectly good copy of
>> the same data.
>>
>> Cheers,
>> Daniel.
>>
>>
>> On 20 September 2015 at 21:25, Daniel Carrera  wrote:
>>
>>> Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using
>>> Profile.print() I found that the hottest parts of the code appeared to be
>>> the if-conditions, such as:
>>>
>>> if o < b_hist[j,3]
>>>
>>> It occurred to me that this could be due to cache misses, so I rewrote
>>> the code to store the data more compactly:
>>>
>>> -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
>>> +  b_hist_1 = sim.b[d, 1:t, i, 1]
>>> +  b_hist_3 = sim.b[d, 1:t, i, 3]
>>> ...
>>> -if o < b_hist[j,3]
>>> +if o < b_hist_3[j]
>>>
>>>
>>> So, instead of an 3xN array, I store two 1xN arrays with the data I
>>> actually want. I suspect that the biggest improvement is not that there is
>>> 1/3 less data, but that the data just gets managed differently. The upshot
>>> is that now the program runs 208 times faster for me than it did initially.
>>> For me time execution time went from 45s to 0.2s.
>>>
>>> As always, the code is updated on Github:
>>>
>>> https://github.com/dcarrera/sim
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>
>>>
>>> On 20 September 2015 at 20:51, Seth  wrote:
>>>
 As an interim step, you can also get text profiling information using
 Profile.print() if the graphics aren't working.

 On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera
 wrote:
>
> Hmm... ProfileView gives me an error:
>
> ERROR: panzoom not defined
>  in view at
> /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>  in include at ./boot.jl:245
>  in include_from_node1 at ./loading.jl:128
> while loading /home/daniel/Projects/optimization/run_sim.jl, in
> expression starting on line 55
>
> Do I need to update something?
>
> Cheers,
> Daniel.
>
> On 20 September 2015 at 20:28, Kristoffer Carlsson  > wrote:
>
>> https://github.com/timholy/ProfileView.jl is invaluable for
>> performance tweaking.
>>
>> Are you on 0.4?
>>
>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan
>> Bouchet-Valat wrote:
>>>
>>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit
>>> :
>>> >
>>> >
>>> > On 20 September 2015 at 19:43, Kristoffer Carlsson <
>>> > kcarl...@gmail.com> wrote:
>>> > > Did you run the code twice to not time the JIT compiler?
>>> > >
>>> > > For me, my version runs in 0.24 and Daniels in 0.34.
>>> > >
>>> > > Anyway, adding this to Daniels version:
>>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes
>>> it
>>> > > run in 0.13 seconds for me.
>>> > >
>>> > >
>>> >
>>> > Interesting. For me that change only makes a 10-20% improvement.
>>> On
>>> > my laptop the program takes about 1.5s which is similar to Adam's.
>>> So
>>> > I guess we are running on similar hardware and you are probably
>>> using
>>> > a faster desktop. In any case, I added the change and updated the
>>> > repository:
>>> >
>>> > https://github.com/dcarrera/sim
>>> >
>>> > Is there a good way to profile Julia code? So I have been
>>> profiling
>>> > by inserting tic() and toc() lines everywhere. On my computer
>>> > @profile seems to do the same thing as @time, so it's kind of
>>> useless
>>> > if I want to find the hot spots in a program.
>>> Sure :
>>> http://julia.readthedocs.org/en/latest/manual/profile/
>>>
>>>
>>> Regards
>>>
>>
>
>>>
>>


[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Serge Santos
Hi All,

I tried to roll back to JuliaParser v0.6.2 and it didn't work. 

If someone still manages to successfully run Juno with Julia 0.3.11, can 
you please send the list of packages with version numbers that does not 
create any issues with Juno (i.e., the output from Pkg.status())? I was not 
able to figure out what combination of versions works. 

Many thanks in advance
Serge

On Monday, 21 September 2015 00:16:27 UTC+1, Greg Plowman wrote:
>
> Hi All,
>
> On 2 different PCs where Juno works (almost without error) Pkg.status() 
> reports JuliaParser v0.6.2
> On PC that has Juno errors, Pkg.status() reports JuliaParser v0.6.3
> Rolling back to JuliaParser v0.1.2 creates different errors.
> So it seems we need to revert to JuliaParser v0.6.2
>
> I'm not at a PC where I can see if we can pin v0.6.2, in light of the 
> following:
>
> Before this JuliaParser was at version v0.6.3, are you sure we should try 
>>> reverting to v0.1.2?
>>
>> See the tagged versions 
>
> https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the 
>> next latest tagged version. You could probably checkout a specific commit 
>> prior to the commit that’s causing the breakage instead though.
>
> Also, I don't want to play around with Pkg.ANYTHING on a working 
> configuration at the moment :)
>
> -- Greg
>
>
> On Monday, September 21, 2015 at 4:50:37 AM UTC+10, Tony Kelman wrote:
>
>> What's temporarily broken here is some of the packages that 
>> Light-Table-based Juno relies on to work. In the meantime you can still use 
>> command-line REPL Julia, and while it's not the most friendly interface 
>> your code will still run. Your estimation of the Julia ecosystem's 
>> robustness is pretty accurate though, if you really want to ensure things 
>> stay working the best way of doing that right now is keeping all packages 
>> pinned until you have a chance to thoroughly test the versions that an 
>> upgrade would give you. We plan on automating some of this testing going 
>> forward, though in the case of Juno much of the code is being replaced 
>> right now and the replacements aren't totally ready just yet.
>>
>>
>> On Sunday, September 20, 2015 at 10:45:01 AM UTC-7, Serge Santos wrote:
>>>
>>> Thank you all for your inputs. It tried your suggestions and, 
>>> unfortunately, it does not work. I tried Atom but, after a good start and 
>>> some success, it keeps crashing in middle of a calculation (windows 10).
>>>
>>> To summarize what I tried with Juno and julia 0.3.11:
>>> - Compat v.0.7.0 (pinned)
>>> - JuliaParser V0.1.2  (pinned)
>>> - Jewel v1.0.6.
>>>
>>> I get a first error message, which seems to indicate that Julia cannot 
>>> generate an output.
>>>
>>> *symbol could not be found jl_generating_output (-1): The specified 
>>> procedure could not be found.*
>>>
>>> Followed by:
>>>
>>> *WARNING: LightTable.jl: `skipws` has no method matching 
>>> skipws(::TokenStream)*
>>> * in scopes at C:\Users\Serge\.julia\v0.3\Jewel\src\parse\scope.jl:148*
>>> * in codemodule at 
>>> C:\Users\Serge\.julia\v0.3\Jewel\src\parse/parse.jl:141*
>>> * in getmodule at C:\Users\Serge\.julia\v0.3\Jewel\src\eval.jl:42*
>>> * in anonymous at 
>>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable\eval.jl:51*
>>> * in handlecmd at 
>>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:65*
>>> * in handlenext at 
>>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:81*
>>> * in server at 
>>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:22*
>>> * in server at C:\Users\Serge\.julia\v0.3\Jewel\src\Jewel.jl:18*
>>> * in include at boot.jl:245*
>>> * in include_from_node1 at loading.jl:128*
>>> * in process_options at client.jl:285*
>>> * in _start at client.jl:354*
>>>
>>> Looking at dependencies with MetadataTools, I simply got: *nothing.* I 
>>> assume that Jewel does not have any dependencies.
>>>
>>> I have a lot of understanding for the effort that goes into making the 
>>> Julia project work and a success, but I just lost two days of my life 
>>> trying to make things work and I have an important deadline ahead that I am 
>>> likely to miss because I relied on a promising tool that, unfortunately, 
>>> does not seem robust enough at this stage given the amount of development 
>>> happening. It makes me wonder whether I should wait until Julia becomes more 
>>> established and robust, and switch to other solutions in the meantime.
>>>
>>> On Sunday, 20 September 2015 16:47:05 UTC+1, Dongning Guo wrote:

 In case you're stuck, this may be a way out:
 I installed Atom editor and it seems Julia (v0.5??? nightly build)
 works with it after installing a few packages.  I'm learning to use
 the new environment ...
 See https://github.com/JunoLab/atom-julia-client/tree/master/manual


 On Sunday, September 20, 2015 at 6:59:02 AM UTC-5, Michael Hatherly 
 wrote:
>
> I can’t see LightTable listed in Pkg.status() output in either PC
>
> The LightTable module is part of the 

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Valentin Churavy
Ah yes, @code_warntype is in Julia v0.4. I think in 0.3 you could just use 
code_typed and look for ::Any 

See the introduction PR https://github.com/JuliaLang/julia/pull/9349
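To make that concrete, here is a hypothetical pair of functions (not from this thread) showing the kind of thing @code_warntype flags, in 0.4+ syntax:

```julia
function unstable(params::Dict)     # untyped Dict: values come out as Any
    x = params["gamma"]             # x is ::Any as far as inference is concerned
    return x * 2                    # so this multiply is dynamically dispatched
end

stable(gamma::Float64) = gamma * 2  # concrete argument: inferred as Float64

# At the REPL (Julia 0.4 or later):
#   @code_warntype unstable(Dict{Any,Any}("gamma" => 0.5))  # shows ::Any entries
#   @code_warntype stable(0.5)                              # all types concrete
```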

On Sunday, 20 September 2015 23:49:55 UTC+9, Daniel Carrera wrote:
>
> Uhmm... I get an error:
>
> ERROR: @code_warntype not defined
>
> Do I need to update Julia or something? I have version 0.3.11.
>
> On 20 September 2015 at 16:14, Valentin Churavy  > wrote:
>
>> take a look at 
>> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,
>> Float64}())
>>
>> It tells you where the compiler has problems inferring the types of the 
>> variables.
>>
>> Problematic in this case is
>>   b_hist::Any
>>   b_hist_col2::Any
>>   numB::Any
>>   b_hist_col2_A::Any
>>   b_hist_col2_B::Any
>>   total_b_A_::Any
>>   total_b_B_::Any
>>   net_::Any
>>
>>
>> On Sunday, 20 September 2015 22:55:50 UTC+9, Daniel Carrera wrote:
>>>
>>> Hi Steven,
>>>
>>> I am not the OP, I am trying to help the OP with his code. Anyway, the 
>>> first thing I did was replace Dict{Any,Any} by the more explicit 
>>> Dict{String,Float64} but that didn't help. I did not think to try a 
>>> composite type. I might try that later. It would be interesting to figure 
>>> out why the OP's code is so much slower in Julia.
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>
>>> On 20 September 2015 at 15:20, Steven G. Johnson  
>>> wrote:
>>>
 Daniel, you are still using a Dict of params, which kills type 
 inference. Pass parameters directly or put them in (typed) fields of a 
 composite type.

 (On the other hand, common misconception: there is no performance need 
 to declare the types of function arguments.)
>>>
>>>
>>>
>

[julia-users] Re: Julia 0.4 RC ppa?

2015-09-20 Thread Tony Kelman
Versioning the julia package name in the ppa would be a very good idea. The 
only reason the PPA is often out of date is that it's entirely maintained 
by a single person who doesn't always have the time to fix things that 
break or update things that would usually be handled automatically. As I 
said it takes more maintenance to keep running than the tarball builds, and 
since the PPA is Ubuntu-specific we've been encouraging people to use the 
generic tarballs now since we have more control over dependencies, public 
visibility to any issues that arise, and the ability for multiple people to 
fix them. I recognize the utility in having your system package manager 
handle updates, but it's a fair bit more maintenance work. Downloading and 
installing a tarball to use the binaries of Julia should be pretty easy, 
and doesn't need root access either.


On Sunday, September 20, 2015 at 8:37:57 AM UTC-7, Glen O wrote:
>
> Is there a reason why the juliareleases ppa couldn't provide a julia0.4 
> package, separately from the current julia package? I've seen similar 
> things done with packages elsewhere, including within the main ubuntu 
> repositories. Indeed, given the changes happening to the language, perhaps 
> it's a good idea to start keeping major versions of julia separate (that 
> is, make it julia0.3 and julia0.4, with julia being a dependency package 
> that will pull in the latest stable julia (i.e., it will point to julia0.3 
> until julia0.4 is properly released, then it will point to julia0.4).
>
> This also minimises issues for people who might have julia 0.3 currently 
> installed and are actively using it, and don't want to accidentally update 
> to 0.4 and have to alter all of their code to account for changes in the 
> language - they would just remove the dependency package, and be guaranteed 
> to remain with julia0.3 only.
>
> I do understand why it might be considered too much of a nuisance for the 
> relatively short RC period, when we can wait for the proper release, but 
> I'm probably not the only person who isn't up to using an in-development 
> version (nightlies), but is willing to use one that might just be slightly 
> buggy (release candidate), and who doesn't want to fiddle with installation 
> or compilation.
>
> On Monday, 21 September 2015 00:47:10 UTC+10, Tony Kelman wrote:
>>
>> Actually it would be expected for julianightlies to be providing 0.5-dev 
>> nightlies right now but it's been failing to update for some time due to 
>> build system changes on master. We have more flexibility and control over 
>> the linux tarball binaries than we do over the ppa. I don't think the ppa 
>> has any effective mechanism to provide release candidates right now.
>
>

[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Steven G. Johnson
Daniel, you are still using a Dict of params, which kills type inference. Pass 
parameters directly or put them in (typed) fields of a composite type. 

(On the other hand, common misconception: there is no performance need to 
declare the types of function arguments.)
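A minimal sketch of that suggestion, with illustrative names (not the OP's actual parameters):

```julia
# Dict version: values are fetched by string key, so the compiler can't
# prove their types and the arithmetic is dynamically dispatched.
net_dict(w, params::Dict) = w - params["gamma"] * params["cost"]

# Composite-type version: every field has a concrete type, so the same
# expression compiles to tight Float64 code.
struct Params            # `immutable` in Julia 0.3/0.4 syntax
    gamma::Float64
    cost::Float64
end
net_typed(w, p::Params) = w - p.gamma * p.cost

d = Dict("gamma" => 0.5, "cost" => 2.0)
p = Params(0.5, 2.0)
@assert net_dict(5.0, d) == net_typed(5.0, p) == 4.0
```

Note that, per the parenthetical above, the win comes from typing the fields, not from annotating the function arguments.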

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Adam
Thanks for the several comments and Daniel for the alternate versions of 
the calc_net function! Viral, unfortunately I'm not a GitHub/version 
control user (yet), but I've copied the code into a gist here: 
https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6. The code can be run 
by running "run_sim.jl". 

On top of the changes I described in my last post, I made changes based on 
Daniel's two posts (including his re-written function, thanks!). Basically, 
this amounted to replacing single-line array manipulation (e.g., with 
".==") with for loops. Daniel, can you clarify your comment of "the first 
two lines require memory allocation and might also have a bad memory 
profile"? I'm not sure if it's addressed in this latest gist or not. 
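The shape of that ".==" → loop rewrite, on toy data rather than the simulation's actual arrays, is roughly:

```julia
codes = [65, 66, 65, 66, 65]
vals  = [1.0, 2.0, 3.0, 4.0, 5.0]

# Single-line version: `codes .== 65` allocates a Bool array, and the
# logical indexing `vals[...]` allocates another array before summing.
total_vectorized = sum(vals[codes .== 65])

# Loop version: same result, no intermediate arrays.
function sum_matching(codes, vals, code)
    total = 0.0
    for j in eachindex(codes)
        if codes[j] == code
            total += vals[j]
        end
    end
    return total
end

@assert total_vectorized == sum_matching(codes, vals, 65) == 9.0
```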

I ran this version of the code on my laptop, and got the following:

elapsed time: 3.473756352 seconds (713768968 bytes allocated, 15.06% gc 
time)
elapsed time: 2.5804882 seconds (673465152 bytes allocated, 20.63% gc time)
elapsed time: 2.579725004 seconds (673465152 bytes allocated, 19.07% gc 
time)

This is an improvement! With that said, the Julia code is still 1.8x to 2x 
slower than the Matlab code. Any tips on additional changes I can make so I 
can (greatly) outperform the Matlab code?

Lastly, since this is my first post to the group, it seems my messages and 
replies need to be moderated and approved, so there's some delay in my 
replies. Hasn't been a huge issue thus far, but just an FYI. 

On Sunday, September 20, 2015 at 9:14:22 AM UTC-5, Valentin Churavy wrote:
>
> take a look at 
> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,
> Float64}())
>
> It tells you where the compiler has problems inferring the types of the 
> variables.
>
> Problematic in this case is
>   b_hist::Any
>   b_hist_col2::Any
>   numB::Any
>   b_hist_col2_A::Any
>   b_hist_col2_B::Any
>   total_b_A_::Any
>   total_b_B_::Any
>   net_::Any
>
>
> On Sunday, 20 September 2015 22:55:50 UTC+9, Daniel Carrera wrote:
>>
>> Hi Steven,
>>
>> I am not the OP, I am trying to help the OP with his code. Anyway, the 
>> first thing I did was replace Dict{Any,Any} by the more explicit 
>> Dict{String,Float64} but that didn't help. I did not think to try a 
>> composite type. I might try that later. It would be interesting to figure 
>> out why the OP's code is so much slower in Julia.
>>
>> Cheers,
>> Daniel.
>>
>>
>> On 20 September 2015 at 15:20, Steven G. Johnson  
>> wrote:
>>
>>> Daniel, you are still using a Dict of params, which kills type 
>>> inference. Pass parameters directly or put them in (typed) fields of a 
>>> composite type.
>>>
>>> (On the other hand, common misconception: there is no performance need 
>>> to declare the types of function arguments.)
>>
>>
>>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Whoo hoo!

After replacing all the Dict's with appropriate composite types, I got an
additional ~4x speed improvement. Combined with my previous work, the code
is now 30x faster than the original. So now Julia should at least match
Matlab. I uploaded the modified code to GitHub:

https://github.com/dcarrera/sim

Cheers,
Daniel.


On 20 September 2015 at 16:08, Tim Holy  wrote:

> String is not a concrete type. Consider ASCIIString or UTF8String.
>
> But if you don't need the flexibility of a Dict, a composite type will be a
> huge improvement.
>
> --Tim
>
> On Sunday, September 20, 2015 03:55:43 PM Daniel Carrera wrote:
> > Hi Steven,
> >
> > I am not the OP, I am trying to help the OP with his code. Anyway, the
> > first thing I did was replace Dict{Any,Any} by the more explicit
> > Dict{String,Float64} but that didn't help. I did not think to try a
> > composite type. I might try that later. It would be interesting to figure
> > out why the OP's code is so much slower in Julia.
> >
> > Cheers,
> > Daniel.
> >
> >
> > On 20 September 2015 at 15:20, Steven G. Johnson 
> >
> > wrote:
> > > Daniel, you are still using a Dict of params, which kills type
> inference.
> > > Pass parameters directly or put them in (typed) fields of a composite
> > > type.
> > >
> > > (On the other hand, common misconception: there is no performance need
> to
> > > declare the types of function arguments.)
>
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
On 20 September 2015 at 17:08, Adam  wrote:

> Daniel, can you clarify your comment of "the first two lines require
> memory allocation and might also have a bad memory profile"? I'm not sure
> if it's addressed in this latest gist or not.
>

I don't know how much you know about computers, so forgive me if I end up
telling you things you already know:

In modern computer architectures, CPUs are extremely fast compared to RAM.
Often the CPU spends most of the time waiting for data to arrive. When your
program needs data, it first tries to get it from the local CPU cache. If
the data is not in cache, it has to get it from RAM and the CPU has to
wait. Often, the absolute best way to optimize a program is to minimize
cache misses. The way to do that is to access data in the same order that
it is stored in memory. Re-arranging memory is usually bad, because it
requires copying memory and often requires accessing data out-of-order,
which leads to cache misses.
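A toy illustration of this in Julia (not code from the thread): arrays are stored column-major, so making the row index vary fastest in the inner loop touches memory sequentially, while the opposite order strides through memory.

```julia
function sum_column_order(A)
    s = 0.0
    for j in 1:size(A, 2), i in 1:size(A, 1)   # i (row) varies fastest:
        s += A[i, j]                           # sequential, cache-friendly access
    end
    return s
end

function sum_row_order(A)
    s = 0.0
    for i in 1:size(A, 1), j in 1:size(A, 2)   # j varies fastest: strided,
        s += A[i, j]                           # cache-unfriendly access
    end
    return s
end

A = ones(1000, 1000)
@assert sum_column_order(A) == sum_row_order(A) == 1.0e6
```

Both functions compute the same sum; only the memory-access pattern differs, which is exactly the kind of change that shows up in timings.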

This is an improvement! With that said, the Julia code is still 1.8x to 2x
> slower than the Matlab code. Any tips on additional changes I can make so I
> can (greatly) outperform the Matlab code?
>

Have a look at the new version I posted. I got an additional 4x improvement
by removing all the Dict's. Basically, the Dict's forced Julia's compiler
to produce very generic code. Using concrete types allowed the compiler to
optimize the code better.

Cheers,
Daniel.


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread STAR0SS
The biggest problem right now is that Prob_o is global in calc_net. You 
need to pass it as an argument too. It's one of the drawbacks of having 
everything global by default; this kind of mistake is sometimes hard to 
spot.

Otherwise, the "# get things in right dimensions for calculation below" 
comment at line 55 is not necessary anymore.

In these zeros(numO, 1) calls you don't need the 1: zeros(numO) gives you a 
vector of length numO, unlike in Matlab where it gives you a matrix.
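A minimal sketch of why the global matters (illustrative names, not the actual simulation code):

```julia
prob_o = 0.25                          # non-const global: its type can change,
                                       # so accesses inside functions are untyped

uses_global(x) = x * prob_o            # prob_o resolved at run time, ::Any

uses_argument(x, p::Float64) = x * p   # p is concretely typed: fast path

@assert uses_global(4.0) == uses_argument(4.0, 0.25) == 1.0
```

Same result, but the compiler can only specialize the second version.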
 


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
On 20 September 2015 at 17:39, STAR0SS  wrote:

> The biggest problem right now is that Prob_o is global in calc_net. You
> need to pass it as an argument too. It's one of the drawbacks of having
> everything global by default; this kind of mistake is sometimes hard to
> spot.
>

But... calc_net does not use Prob_o ... ?



>
> Otherwise, the "# get things in right dimensions for calculation below"
> comment at line 55 is not necessary anymore.
>
> In these zeros(numO, 1) calls you don't need the 1: zeros(numO) gives you
> a vector of length numO, unlike in Matlab where it gives you a matrix.
>
>

I cleaned up the code and updated Github. No speed difference though.

Cheers,
Daniel.


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Valentin Churavy
take a look at 
@code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,Float64
}())

It tells you where the compiler has problems inferring the types of the 
variables.

Problematic in this case is
  b_hist::Any
  b_hist_col2::Any
  numB::Any
  b_hist_col2_A::Any
  b_hist_col2_B::Any
  total_b_A_::Any
  total_b_B_::Any
  net_::Any


On Sunday, 20 September 2015 22:55:50 UTC+9, Daniel Carrera wrote:
>
> Hi Steven,
>
> I am not the OP, I am trying to help the OP with his code. Anyway, the 
> first thing I did was replace Dict{Any,Any} by the more explicit 
> Dict{String,Float64} but that didn't help. I did not think to try a 
> composite type. I might try that later. It would be interesting to figure 
> out why the OP's code is so much slower in Julia.
>
> Cheers,
> Daniel.
>
>
> On 20 September 2015 at 15:20, Steven G. Johnson  > wrote:
>
>> Daniel, you are still using a Dict of params, which kills type inference. 
>> Pass parameters directly or put them in (typed) fields of a composite type.
>>
>> (On the other hand, common misconception: there is no performance need to 
>> declare the types of function arguments.)
>
>
>

[julia-users] Slowness when trying to redirect the standard output

2015-09-20 Thread STAR0SS
I'm trying to redirect the standard output into another variable, here a 
GtkTextBuffer, but it's pretty slow for a reason I don't understand. I don't 
fully understand tasks, so maybe there's a scope or compilation problem; I 
don't know. 

Using @time is particularly bad for some reason: it takes about two seconds 
to complete.

Inserting into a buffer is fast by the way, even if it's a global variable. 
Any hints?


using Gtk


function redirect()
    buffer = @GtkTextBuffer()

    function print_std_out(rd::Base.PipeEndpoint, buffer::GtkTextBuffer)
        response = readavailable(rd)
        if !isempty(response)
            insert!(buffer, bytestring(response))
        end
    end

    stdout_task = @schedule begin
        rd, wr = redirect_stdout()
        while true
            print_std_out(rd, buffer)
        end
    end
end

redirect()

@time 1


[julia-users] Julia 0.4 RC ppa?

2015-09-20 Thread Tony Kelman
Actually it would be expected for julianightlies to be providing 0.5-dev 
nightlies right now but it's been failing to update for some time due to build 
system changes on master. We have more flexibility and control over the linux 
tarball binaries than we do over the ppa. I don't think the ppa has any 
effective mechanism to provide release candidates right now.

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Kristoffer Carlsson
This runs in 0.3 
seconds: 
https://github.com/KristofferC/calcnet/commit/2c252a31c34eb92842310b31d753a64727b95875
 
but I haven't checked if the result is the same, hehe. It should give you a 
feel for how to write fast code in Julia, though.

Bottleneck now is all the repmats.
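For instance, where the code tiles a scalar into a numO-long array with repmat just to line up dimensions, fill or plain broadcasting avoids the copies. A sketch with made-up names:

```julia
numO = 5
w0 = 10.0
b_su = collect(1.0:5.0)

# repmat-style: materialize the scalar into an array just to match shapes
w0_ = fill(w0, numO)        # stands in for repmat([w0], numO, 1)
net_tiled = w0_ + b_su

# broadcast-style: let the scalar participate directly, no temporary array
net_bcast = w0 .+ b_su

@assert net_tiled == net_bcast
```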

On Sunday, September 20, 2015 at 4:49:55 PM UTC+2, Daniel Carrera wrote:
>
> Uhmm... I get an error:
>
> ERROR: @code_warntype not defined
>
> Do I need to update Julia or something? I have version 0.3.11.
>
> On 20 September 2015 at 16:14, Valentin Churavy  > wrote:
>
>> take a look at 
>> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,
>> Float64}())
>>
>> It tells you where the compiler has problems inferring the types of the 
>> variables.
>>
>> Problematic in this case is
>>   b_hist::Any
>>   b_hist_col2::Any
>>   numB::Any
>>   b_hist_col2_A::Any
>>   b_hist_col2_B::Any
>>   total_b_A_::Any
>>   total_b_B_::Any
>>   net_::Any
>>
>>
>> On Sunday, 20 September 2015 22:55:50 UTC+9, Daniel Carrera wrote:
>>>
>>> Hi Steven,
>>>
>>> I am not the OP, I am trying to help the OP with his code. Anyway, the 
>>> first thing I did was replace Dict{Any,Any} by the more explicit 
>>> Dict{String,Float64} but that didn't help. I did not think to try a 
>>> composite type. I might try that later. It would be interesting to figure 
>>> out why the OP's code is so much slower in Julia.
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>
>>> On 20 September 2015 at 15:20, Steven G. Johnson  
>>> wrote:
>>>
 Daniel, you are still using a Dict of params, which kills type 
 inference. Pass parameters directly or put them in (typed) fields of a 
 composite type.

 (On the other hand, common misconception: there is no performance need 
 to declare the types of function arguments.)
>>>
>>>
>>>
>

Re: [julia-users] Re: What does the `|>` operator do? (possibly a Gtk.jl question)

2015-09-20 Thread Daniel Carrera
Ha!

That is really cool. Thanks!

On 20 September 2015 at 11:19, STAR0SS  wrote:

> From the help:
>
> help?> |>
> search: |>
>
> ..  |>(x, f)
>
> Applies a function to the preceding argument. This allows for easy
> function chaining.
>
> .. doctest::
>
> julia> [1:5;] |> x->x.^2 |> sum |> inv
> 0.01818181818181818
>
> The implementation is quite simple:
>
> |>(x, f) = f(x)
>
> https://github.com/JuliaLang/julia/blob/master/base/operators.jl#L198
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Uhmm... I get an error:

ERROR: @code_warntype not defined

Do I need to update Julia or something? I have version 0.3.11.

On 20 September 2015 at 16:14, Valentin Churavy  wrote:

> take a look at
> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,
> Float64}())
>
> It tells you where the compiler has problems inferring the types of the
> variables.
>
> Problematic in this case is
>   b_hist::Any
>   b_hist_col2::Any
>   numB::Any
>   b_hist_col2_A::Any
>   b_hist_col2_B::Any
>   total_b_A_::Any
>   total_b_B_::Any
>   net_::Any
>
>
> On Sunday, 20 September 2015 22:55:50 UTC+9, Daniel Carrera wrote:
>>
>> Hi Steven,
>>
>> I am not the OP, I am trying to help the OP with his code. Anyway, the
>> first thing I did was replace Dict{Any,Any} by the more explicit
>> Dict{String,Float64} but that didn't help. I did not think to try a
>> composite type. I might try that later. It would be interesting to figure
>> out why the OP's code is so much slower in Julia.
>>
>> Cheers,
>> Daniel.
>>
>>
>> On 20 September 2015 at 15:20, Steven G. Johnson 
>> wrote:
>>
>>> Daniel, you are still using a Dict of params, which kills type
>>> inference. Pass parameters directly or put them in (typed) fields of a
>>> composite type.
>>>
>>> (On the other hand, common misconception: there is no performance need
>>> to declare the types of function arguments.)
>>
>>
>>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Stefan Karpinski
Only your first post should be moderated.

> On Sep 20, 2015, at 11:08 AM, Adam  wrote:
> 
> Thanks for the several comments and Daniel for the alternate versions of the 
> calc_net function! Viral, unfortunately I'm not a GitHub/version control user 
> (yet), but I've copied the code into a gist here: 
> https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6. The code can be run 
> by running "run_sim.jl". 
> 
> On top of the changes I described in my last post, I made changes based on 
> Daniel's two posts (including his re-written function, thanks!). Basically, 
> this amounted to replacing single-line array manipulation (e.g., with ".==") 
> with for loops. Daniel, can you clarify your comment of "the first two lines 
> require memory allocation and might also have a bad memory profile"? I'm not 
> sure if it's addressed in this latest gist or not. 
> 
> I ran this version of the code on my laptop, and got the following:
> 
> elapsed time: 3.473756352 seconds (713768968 bytes allocated, 15.06% gc time)
> elapsed time: 2.5804882 seconds (673465152 bytes allocated, 20.63% gc time)
> elapsed time: 2.579725004 seconds (673465152 bytes allocated, 19.07% gc time)
> 
> This is an improvement! With that said, the Julia code is still 1.8x to 2x 
> slower than the Matlab code. Any tips on additional changes I can make so I 
> can (greatly) outperform the Matlab code?
> 
> Lastly, since this is my first post to the group, it seems my messages and 
> replies need to be moderated and approved, so there's some delay in my 
> replies. Hasn't been a huge issue thus far, but just an FYI. 
> 
>> On Sunday, September 20, 2015 at 9:14:22 AM UTC-5, Valentin Churavy wrote:
>> take a look at 
>> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), 
>> Dict{String,Float64}())
>> 
>> It tells you where the compiler has problems inferring the types of the 
>> variables.
>> 
>> Problematic in this case is
>>   b_hist::Any
>>   b_hist_col2::Any
>>   numB::Any
>>   b_hist_col2_A::Any
>>   b_hist_col2_B::Any
>>   total_b_A_::Any
>>   total_b_B_::Any
>>   net_::Any
>> 
>> 
>>> On Sunday, 20 September 2015 22:55:50 UTC+9, Daniel Carrera wrote:
>>> Hi Steven,
>>> 
>>> I am not the OP, I am trying to help the OP with his code. Anyway, the 
>>> first thing I did was replace Dict{Any,Any} by the more explicit 
>>> Dict{String,Float64} but that didn't help. I did not think to try a 
>>> composite type. I might try that later. It would be interesting to figure 
>>> out why the OP's code is so much slower in Julia.
>>> 
>>> Cheers,
>>> Daniel.
>>> 
>>> 
 On 20 September 2015 at 15:20, Steven G. Johnson  
 wrote:
 Daniel, you are still using a Dict of params, which kills type inference. 
 Pass parameters directly or put them in (typed) fields of a composite type.
 
 (On the other hand, common misconception: there is no performance need to 
 declare the types of function arguments.)
>>> 


[julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
I managed to get another 2.5x improvement with this function:

function calc_net( d::Int64, t::Int64, i::Int64, sim::Dict, param::Dict )

    # name some things
    gamma = param["gamma"]
    o_vec = param["o_vec"]
    numO = length(o_vec)
    w0 = sim["w"][d, 1, i]

    # retrieve some history
    b_hist = reshape(sim["b"][d, 1:t, i, :], t, 3)
    b_hist_col2 = reshape(sim["b"][d, 1:t, i, 2], t, 1)

    numB = length(b_hist[:,1])
    b_hist_col2_A = zeros(numB,1)
    b_hist_col2_B = zeros(numB,1)
    for j = 1:numB
        b_hist_col2_A[j] = b_hist_col2[j] * (b_hist[j,1] == 65)
        b_hist_col2_B[j] = b_hist_col2[j] * (b_hist[j,1] == 66)
    end

    # ---
    # sum some stuff according to some conditions
    b_A_su_ = zeros(numO, 1)
    b_B_su_ = zeros(numO, 1)
    b_A_pu_ = zeros(numO, 1)
    b_B_pu_ = zeros(numO, 1)

    for idx = 1:numO

        o = o_vec[idx]

        for j in 1:numB
            if o < b_hist[j,3]
                if b_hist[j,1] == 65
                    b_A_su_[idx] += b_hist_col2[j]
                end
            elseif o > b_hist[j,3]
                if b_hist[j,1] == 66
                    b_B_su_[idx] += b_hist_col2[j]
                end
            else
                if b_hist[j,1] == 65
                    b_A_pu_[idx] += b_hist_col2[j]
                end
                if b_hist[j,1] == 66
                    b_B_pu_[idx] += b_hist_col2[j]
                end
            end
        end
    end # idx
    # ---

    # get things in right dimensions for calculation below
    w0_ = repmat([w0], numO, 1)
    total_b_A_ = repmat([sum(b_hist_col2_A)], numO, 1)
    total_b_B_ = repmat([sum(b_hist_col2_B)], numO, 1)

    # calculate net
    net_ = w0_ - total_b_A_ - total_b_B_ + b_A_pu_ + b_B_pu_ +
        (1 + gamma) .* (b_A_su_ + b_B_su_)

    # temporarily impose minimum on values of net_
    net_ = max(net_, 0.1)

    # return output
    return net_

end


The most significant change is that I replaced the boolean operations with 
if-statements. This saves type conversion and possibly a bit of memory 
allocation. AFAICT the main loop is as efficient as it can be, and this 
loop seems to dominate the execution time. So I'm not sure what other 
strategies are needed to realize the expected performance from Julia.
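In miniature (toy data, not the simulation's arrays), the boolean-to-if change is:

```julia
flags = [65, 66, 65]
vals  = [1.0, 2.0, 3.0]

# Boolean-arithmetic style: the comparison yields a Bool that is converted
# to a number and multiplied, even when the product is zero.
masked_mul(j) = vals[j] * (flags[j] == 65)

# If-statement style: skip the conversion and multiply on a mismatch.
masked_if(j) = flags[j] == 65 ? vals[j] : 0.0

@assert [masked_mul(j) for j in 1:3] == [masked_if(j) for j in 1:3] == [1.0, 0.0, 3.0]
```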

Cheers,
Daniel.


On Saturday, 19 September 2015 19:50:50 UTC+2, Adam wrote:
>
> Hi, I'm a novice to Julia but have heard promising things and wanted to 
> see if the language can help with a problem I'm working on. I have some 
> Matlab code with some user-defined functions that runs a simulation in 
> about ~1.4 seconds in Matlab (for a certain set of parameters that 
> characterize a small problem instance). I translated the code into Julia, 
> and to my surprise the Julia code runs 5x to 30x slower than the Matlab 
> code. I'll be running this code on much larger problem instances many, many 
> times (within some other loops), so performance is important here. 
>
> I created a GitHub gist that contains a stripped-down version of the Julia 
> code that gets as close to (as I can find) the culprit of the problem. The 
> gist is here: https://gist.github.com/anonymous/010bcbda091381b0de9e. A 
> quick description: 
>
>- set_up.jl sets up parameters and other items.
>- set_up_sim.jl sets up items particular to the simulation.
>- simulation.jl runs the simulation.
>- calc_net.jl, dist_o_i.jl, and update_w.jl are user-defined functions 
>executed in the simulation. 
>
>
> On my laptop (running in Juno with Julia version 0.3.10), this code yields:
> elapsed time: 43.269609577 seconds (20297989440 bytes allocated, 38.77% gc 
> time)
> elapsed time: 38.500054653 seconds (20291872804 bytes allocated, 40.41% gc 
> time)
> elapsed time: 40.238907235 seconds (20291869252 bytes allocated, 39.44% gc 
> time)
>
> Why is so much memory used, and why is so much time spent in garbage 
> collection?
>
> I'm familiar with 
> http://docs.julialang.org/en/release-0.3/manual/performance-tips/ and 
> have tried to follow these tips to the best of my knowledge. One example of 
> what might be seen as low-hanging fruit: I tried removing the type 
> declarations from my functions, but this actually increased the run-time of 
> the code by a few seconds. Also, other permutations of the column orders 
> pertaining to D, T, and I led to slower performance. 
>
> I'm sure there are several issues at play here-- I'm just using Julia for 
> the first time. Any tips would be greatly appreciated. 
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Hi Steven,

I am not the OP, I am trying to help the OP with his code. Anyway, the
first thing I did was replace Dict{Any,Any} by the more explicit
Dict{String,Float64} but that didn't help. I did not think to try a
composite type. I might try that later. It would be interesting to figure
out why the OP's code is so much slower in Julia.

Cheers,
Daniel.


On 20 September 2015 at 15:20, Steven G. Johnson 
wrote:

> Daniel, you are still using a Dict of params, which kills type inference.
> Pass parameters directly or put them in (typed) fields of a composite type.
>
> (On the other hand, common misconception: there is no performance need to
> declare the types of function arguments.)


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
That's very useful. I didn't know about @code_warntype. I'm going to try to
replace all the Dicts with concrete types and see what happens.

On 20 September 2015 at 16:14, Valentin Churavy  wrote:

> take a look at
> @code_warntype calc_net(0, 0, 0, Dict{String,Float64}(), Dict{String,
> Float64}())
>
> It tells you where the compiler has problems inferring the types of the
> variables.
>
> Problematic in this case is
>   b_hist::Any
>   b_hist_col2::Any
>   numB::Any
>   b_hist_col2_A::Any
>   b_hist_col2_B::Any
>   total_b_A_::Any
>   total_b_B_::Any
>   net_::Any
>
>
> On Sunday, 20 September 2015 22:55:50 UTC+9, Daniel Carrera wrote:
>>
>> Hi Steven,
>>
>> I am not the OP, I am trying to help the OP with his code. Anyway, the
>> first thing I did was replace Dict{Any,Any} by the more explicit
>> Dict{String,Float64} but that didn't help. I did not think to try a
>> composite type. I might try that later. It would be interesting to figure
>> out why the OP's code is so much slower in Julia.
>>
>> Cheers,
>> Daniel.
>>
>>
>> On 20 September 2015 at 15:20, Steven G. Johnson 
>> wrote:
>>
>>> Daniel, you are still using a Dict of params, which kills type
>>> inference. Pass parameters directly or put them in (typed) fields of a
>>> composite type.
>>>
>>> (On the other hand, common misconception: there is no performance need
>>> to declare the types of function arguments.)
>>
>>
>>


[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread 'Greg Plowman' via julia-users
OK, I see that the second-latest tag is v0.1.2 (17 June 2014). Seems a strange 
jump.

But now I understand pinning, I can use a strategy of rolling back 
Juno-related packages until Juno works again.

What other packages would Juno depend on?

To help me in this endeavour, I have access to another PC on which Juno 
runs (almost) without error.
Confusingly, Pkg.status() reports JuliaParser v0.6.2 on this second PC
Jewel is v1.0.6 on both PCs.
I can't see LightTable listed in Pkg.status() output in either PC

I think Compat v0.7.2 is also causing the ERROR: @doc not defined issue (
https://groups.google.com/forum/#!topic/julia-users/rsM4hxdkAxg)
so maybe reverting back to Compat v0.7.0 might also help.

-- Greg


On Sunday, September 20, 2015 at 7:08:47 PM UTC+10, Michael Hatherly wrote:

> Before this JuliaParser was at version v0.6.3, are you sure we should try 
> reverting to v0.1.2?
>
> See the tagged versions 
> https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the 
> next latest tagged version. You could probably checkout a specific commit 
> prior to the commit that’s causing the breakage instead though.
>
> What version of Jewel.jl and LightTable.jl are you using?
>
> — Mike
> ​
>
> On Sunday, 20 September 2015 10:56:22 UTC+2, Greg Plowman wrote:
>>
>> Hi,
>>
>> I tried Pkg.pin("JuliaParser", v"0.1.2") but now I get the following 
>> error (multiple times).
>>
>> Before this JuliaParser was at version v0.6.3, are you sure we should try 
>> reverting to v0.1.2?
>>
>>
>> WARNING: LightTable.jl: `skipws` has no method matching skipws(::
>> TokenStream)
>>  in scopes at C:\Users\Greg\.julia\v0.3\Jewel\src\parse\scope.jl:148
>>  in codemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\parse/parse.jl:141
>>  in filemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\module.jl:93
>>  in anonymous at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable\misc.jl:5
>>  in handlecmd at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
>> LightTable.jl:65
>>  in handlenext at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
>> LightTable.jl:81
>>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.
>> jl:22
>>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\Jewel.jl:18
>>  in include at boot.jl:245
>>  in include_from_node1 at loading.jl:128
>>  in process_options at client.jl:285
>>  in _start at client.jl:354
>>
>>
>>
>> Any other suggestions?
>>
>> --Greg
>>
>>
>> On Sunday, September 20, 2015 at 6:16:10 PM UTC+10, Michael Hatherly 
>> wrote:
>>
>>> The type cannot be constructed error should be fixed on 0.3 by 
>>> https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean 
>>> time you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes 
>>> the problem on Julia 0.3. (Or a version earlier than v"0.1.2" if 
>>> needed.)
>>>
>>> I’ve come across the cannot resize array with shared data error a while 
>>> ago with the Atom-based Juno. It was fixed by Pkg.checkouting all the 
>>> involved packages. Might be the same for the LightTable-based Juno, worth a 
>>> try maybe.
>>>
>>> — Mike
>>> On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:

 I tried to solve the problem by running Julia 0.4.0-rc2 instead of 
 Julia 0.3.11. I managed to execute a few commands in Juno, but Juno/Julia 
 is 
 stuck as before. The error message is slightly different though:


- 

WARNING: LightTable.jl: cannot resize array with shared data
 in push! at array.jl:430
 in read_operator at 
 C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:368
 in next_token at 
 C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:752
 in qualifiedname at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:59
 in nexttoken at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:78
 in nextscope! at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:116
 in scopes at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:149
 [inlined code] from C:\Users\Serge\.julia\v0.4\Lazy\src\macros.jl:141
 in codemodule at C:\Users\Serge\.julia\v0.4\Jewel\src\parse/parse.jl:8
 in getmodule at C:\Users\Serge\.julia\v0.4\Jewel\src\eval.jl:42
 in anonymous at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable\eval.jl:51
 in handlecmd at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:65
 in handlenext at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:81
 in server at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:22
 in server at C:\Users\Serge\.julia\v0.4\Jewel\src\Jewel.jl:18
 in include at boot.jl:261
 in include_from_node1 at loading.jl:304
 in process_options at client.jl:308
 in _start at client.jl:411



 On Saturday, 19 September 2015 10:40:49 UTC+1, JKPie wrote:
>
> I have the same problem, I have 

[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Michael Hatherly


I can’t see LightTable listed in Pkg.status() output in either PC

The LightTable module is part of the Jewel package it seems, 
https://github.com/one-more-minute/Jewel.jl/blob/fb854b0a64047ee642773c0aa824993714ee7f56/src/Jewel.jl#L22,
 
and so won’t show up on Pkg.status() output since it’s not a true package 
by itself. Apologies for the misleading directions there.

What other packages would Juno depend on?

You can manually walk through the REQUIRE files to see what Jewel depends 
on, or use MetadataTools to do it:

julia> using MetadataTools
julia> pkgmeta = get_all_pkg();
julia> graph = make_dep_graph(pkgmeta);
julia> deps = get_pkg_dep_graph("Jewel", graph);
julia> map(println, keys(deps.p_to_i));

You shouldn’t need to change versions for most, if any, of what’s listed 
though. (Don’t forget to call Pkg.free on each package you pin once newer 
versions of the packages are tagged.) Compat 0.7.1 should be far enough 
back I think.

— Mike
​
On Sunday, 20 September 2015 13:29:38 UTC+2, Greg Plowman wrote:
>
> OK I see that second latest tag is v0.1.2 (17 June 2014). Seems a strange 
> jump.
>
> But now I understand pinning, I can use a strategy of rolling back 
> Juno-related packages until Juno works again.
>
> What other packages would Juno depend on?
>
> To help me in this endeavour, I have access to another PC on which Juno 
> runs (almost) without error.
> Confusingly, Pkg.status() reports JuliaParser v0.6.2 on this second PC
> Jewel is v1.0.6 on both PCs.
> I can't see LightTable listed in Pkg.status() output in either PC
>
> I think Compat v0.7.2 is also causing ERROR: @doc not defined issue (
> https://groups.google.com/forum/#!topic/julia-users/rsM4hxdkAxg)
> so maybe reverting back to Compat v0.7.0 might also help.
>
> -- Greg
>
>
> On Sunday, September 20, 2015 at 7:08:47 PM UTC+10, Michael Hatherly wrote:
>
>> Before this JuliaParser was at version v0.6.3, are you sure we should try 
>> reverting to v0.1.2?
>>
>> See the tagged versions 
>> https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the 
>> next latest tagged version. You could probably checkout a specific commit 
>> prior to the commit that’s causing the breakage instead though.
>>
>> What version of Jewel.jl and LightTable.jl are you using?
>>
>> — Mike
>> ​
>>
>> On Sunday, 20 September 2015 10:56:22 UTC+2, Greg Plowman wrote:
>>>
>>> Hi,
>>>
>>> I tried Pkg.pin("JuliaParser", v"0.1.2") but now I get the following 
>>> error (multiple times).
>>>
>>> Before this JuliaParser was at version v0.6.3, are you sure we should 
>>> try reverting to v0.1.2?
>>>
>>>
>>> WARNING: LightTable.jl: `skipws` has no method matching skipws(::
>>> TokenStream)
>>>  in scopes at C:\Users\Greg\.julia\v0.3\Jewel\src\parse\scope.jl:148
>>>  in codemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\parse/parse.jl:141
>>>  in filemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\module.jl:93
>>>  in anonymous at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable\misc.jl:
>>> 5
>>>  in handlecmd at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
>>> LightTable.jl:65
>>>  in handlenext at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
>>> LightTable.jl:81
>>>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.
>>> jl:22
>>>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\Jewel.jl:18
>>>  in include at boot.jl:245
>>>  in include_from_node1 at loading.jl:128
>>>  in process_options at client.jl:285
>>>  in _start at client.jl:354
>>>
>>>
>>>
>>> Any other suggestions?
>>>
>>> --Greg
>>>
>>>
>>> On Sunday, September 20, 2015 at 6:16:10 PM UTC+10, Michael Hatherly 
>>> wrote:
>>>
 The type cannot be constructed error should be fixed on 0.3 by 
 https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean 
 time you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes 
 the problem on Julia 0.3. (Or a version earlier than v"0.1.2" if 
 needed.)

 I’ve come across the cannot resize array with shared data error a 
 while ago with the Atom-based Juno. It was fixed by Pkg.checkouting 
 all the involved packages. Might be the same for the LightTable-based Juno, 
 worth a try maybe.

 — Mike
 On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:
>
> I tried to solve the problem by running Julia 0.4.0-rc2 instead of 
> Julia 0.3.11. I managed to execute a few commands in Juno, but Juno/Julia 
> is 
> stuck as before. The error message is slightly different though:
>
>
>- 
>
>WARNING: LightTable.jl: cannot resize array with shared data
> in push! at array.jl:430
> in read_operator at 
> C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:368
> in next_token at 
> C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:752
> in qualifiedname at 
> C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:59
> in nexttoken at 

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Tim Holy
String is not a concrete type. Consider ASCIIString or UTF8String.

But if you don't need the flexibility of a Dict, a composite type will be a 
huge improvement.
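
As a hedged sketch of that suggestion (the field names here are made up, and this uses the `struct` keyword from later Julia versions; in the 0.3/0.4 era of this thread the keyword was `immutable`):

```julia
# Hypothetical parameter container: concrete field types let the
# compiler infer everything, unlike a Dict{ASCIIString,Float64} lookup
# whose values are only known to be the Dict's element type.
struct Params
    alpha::Float64
    beta::Float64
    n::Int
end

# Field access is type-stable, so this compiles to tight code.
calc(p::Params) = p.alpha * p.n + p.beta
```

Compare with `params["alpha"] * params["n"] + params["beta"]`, where a single Dict cannot even hold a Float64 and an Int without widening its value type.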

--Tim

On Sunday, September 20, 2015 03:55:43 PM Daniel Carrera wrote:
> Hi Steven,
> 
> I am not the OP, I am trying to help the OP with his code. Anyway, the
> first thing I did was replace Dict{Any,Any} by the more explicit
> Dict{String,Float64} but that didn't help. I did not think to try a
> composite type. I might try that later. It would be interesting to figure
> out why the OP's code is so much slower in Julia.
> 
> Cheers,
> Daniel.
> 
> 
> On 20 September 2015 at 15:20, Steven G. Johnson 
> 
> wrote:
> > Daniel, you are still using a Dict of params, which kills type inference.
> > Pass parameters directly or put them in (typed) fields of a composite
> > type.
> > 
> > (On the other hand, common misconception: there is no performance need to
> > declare the types of function arguments.)



[julia-users] Re: Pkg.build("IJulia") failed with rc2

2015-09-20 Thread Johan Sigfrids
The cycle of errors is not infinite, just very long. I left it running 
overnight, and by the next morning it had finished.

On Sunday, September 20, 2015 at 11:15:35 AM UTC+3, Chris Stook wrote:
>
> After updating to rc2, Pkg.build("IJulia") resulted in an infinite cycle 
> of errors.  After watching the errors for a few minutes I hit ctrl-C a 
> few times to stop.  I then tried to build WinRPM.  This also resulted in a 
> cycle of errors.  Next I quit julia, deleted the v0.4 directory, restarted 
> julia and Pkg.add("IJulia").  This also resulted in a cycle of errors.
>
> Now Jupyter will not run for Julia 0.3.11 or 0.4.0-rc2.  It does still 
> work with Python 3.
>
> I'm going to go back to rc1 and see if the problem goes away.
>
> -Chris
>
>

[julia-users] Re: Julia 0.4 RC ppa?

2015-09-20 Thread Glen O
Is there a reason why the juliareleases ppa couldn't provide a julia0.4 
package separately from the current julia package? I've seen similar 
things done with packages elsewhere, including within the main ubuntu 
repositories. Indeed, given the changes happening to the language, perhaps 
it's a good idea to start keeping major versions of julia separate (that 
is, make it julia0.3 and julia0.4, with julia being a dependency package 
that pulls in the latest stable julia: it would point to julia0.3 until 
julia0.4 is properly released, then to julia0.4).

This also minimises issues for people who might have julia 0.3 currently 
installed and are actively using it, and don't want to accidentally update 
to 0.4 and have to alter all of their code to account for changes in the 
language - they would just remove the dependency package, and be guaranteed 
to remain with julia0.3 only.

I do understand why it might be considered too much of a nuisance for the 
relatively short RC period, when we can wait for the proper release, but 
I'm probably not the only person who isn't up to using an in-development 
version (nightlies), but is willing to use one that might just be slightly 
buggy (release candidate), and who doesn't want to fiddle with installation 
or compilation.

On Monday, 21 September 2015 00:47:10 UTC+10, Tony Kelman wrote:
>
> Actually it would be expected for julianightlies to be providing 0.5-dev 
> nightlies right now but it's been failing to update for some time due to 
> build system changes on master. We have more flexibility and control over 
> the linux tarball binaries than we do over the ppa. I don't think the ppa 
> has any effective mechanism to provide release candidates right now.



[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Dongning Guo
In case you're stuck, this may be a way out:
I installed the Atom editor and it seems Julia (v0.5??? nightly build)
works with it after installing a few packages.  I'm learning to use
the new environment ...
See https://github.com/JunoLab/atom-julia-client/tree/master/manual


On Sunday, September 20, 2015 at 6:59:02 AM UTC-5, Michael Hatherly wrote:
>
> I can’t see LightTable listed in Pkg.status() output in either PC
>
> The LightTable module is part of the Jewel package it seems, 
> https://github.com/one-more-minute/Jewel.jl/blob/fb854b0a64047ee642773c0aa824993714ee7f56/src/Jewel.jl#L22,
>  
> and so won’t show up on Pkg.status() output since it’s not a true package 
> by itself. Apologies for the misleading directions there.
>
> What other packages would Juno depend on?
>
> You can manually walk through the REQUIRE files to see what Jewel depends 
> on, or use MetadataTools to do it:
>
> julia> using MetadataTools
> julia> pkgmeta = get_all_pkg();
> julia> graph = make_dep_graph(pkgmeta);
> julia> deps = get_pkg_dep_graph("Jewel", graph);
> julia> map(println, keys(deps.p_to_i));
>
> You shouldn’t need to change versions for most, if any, of what’s listed 
> though. (Don’t forget to call Pkg.free on each package you pin once newer 
> versions of the packages are tagged.) Compat 0.7.1 should be far enough 
> back I think.
>
> — Mike
> ​
> On Sunday, 20 September 2015 13:29:38 UTC+2, Greg Plowman wrote:
>>
>> OK I see that second latest tag is v0.1.2 (17 June 2014). Seems a strange 
>> jump.
>>
>> But now I understand pinning, I can use a strategy of rolling back 
>> Juno-related packages until Juno works again.
>>
>> What other packages would Juno depend on?
>>
>> To help me in this endeavour, I have access to another PC on which Juno 
>> runs (almost) without error.
>> Confusingly, Pkg.status() reports JuliaParser v0.6.2 on this second PC
>> Jewel is v1.0.6 on both PCs.
>> I can't see LightTable listed in Pkg.status() output in either PC
>>
>> I think Compat v0.7.2 is also causing ERROR: @doc not defined issue (
>> https://groups.google.com/forum/#!topic/julia-users/rsM4hxdkAxg)
>> so maybe reverting back to Compat v0.7.0 might also help.
>>
>> -- Greg
>>
>>
>> On Sunday, September 20, 2015 at 7:08:47 PM UTC+10, Michael Hatherly 
>> wrote:
>>
>>> Before this JuliaParser was at version v0.6.3, are you sure we should 
>>> try reverting to v0.1.2?
>>>
>>> See the tagged versions 
>>> https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the 
>>> next latest tagged version. You could probably checkout a specific commit 
>>> prior to the commit that’s causing the breakage instead though.
>>>
>>> What version of Jewel.jl and LightTable.jl are you using?
>>>
>>> — Mike
>>> ​
>>>
>>> On Sunday, 20 September 2015 10:56:22 UTC+2, Greg Plowman wrote:

 Hi,

 I tried Pkg.pin("JuliaParser", v"0.1.2") but now I get the following 
 error (multiple times).

 Before this JuliaParser was at version v0.6.3, are you sure we should 
 try reverting to v0.1.2?


 WARNING: LightTable.jl: `skipws` has no method matching skipws(::
 TokenStream)
  in scopes at C:\Users\Greg\.julia\v0.3\Jewel\src\parse\scope.jl:148
  in codemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\parse/parse.jl:
 141
  in filemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\module.jl:93
  in anonymous at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable\misc.jl
 :5
  in handlecmd at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
 LightTable.jl:65
  in handlenext at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
 LightTable.jl:81
  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable
 .jl:22
  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\Jewel.jl:18
  in include at boot.jl:245
  in include_from_node1 at loading.jl:128
  in process_options at client.jl:285
  in _start at client.jl:354



 Any other suggestions?

 --Greg


 On Sunday, September 20, 2015 at 6:16:10 PM UTC+10, Michael Hatherly 
 wrote:

> The type cannot be constructed error should be fixed on 0.3 by 
> https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean 
> time you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes 
> the problem on Julia 0.3. (Or a version earlier than v"0.1.2" if 
> needed.)
>
> I’ve come across the cannot resize array with shared data error a 
> while ago with the Atom-based Juno. It was fixed by Pkg.checkouting 
> all the involved packages. Might be the same for the LightTable-based 
> Juno, 
> worth a try maybe.
>
> — Mike
> On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:
>>
>> I tried to solve the problem by running Julia 0.4.0-rc2 instead of 
>> Julia 0.3.11. I managed to execute a few commands in Juno, but Juno/Julia 
>> is 
>> stuck as before. The error message is 

[julia-users] linspace and HDF5

2015-09-20 Thread Jan Strube
I'm trying to write some data and axis definitions to HDF5 for later 
plotting in pyplot. (Because I haven't figured out how to do lognorm on 
pcolormesh in PyPlot.jl)
Writing the data - a 2D array - is no problem.
Writing the axes - linspace(min, max, 100) - doesn't work, because I just 
found out that linspace creates a LinSpace object, not an array, and HDF5 
doesn't know how to write that.
My question is: What is an idiomatic way to turn LinSpace into an Array? Is 
collect the recommended way to do this?



Re: [julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread 'Greg Plowman' via julia-users
To further clarify, I thought I could specialise getindex / setindex! on a 
Colon-type argument.

see below, getindex2 is being called, but getindex is not being called.
A[:] calls getindex(A::AbstractArray{T,N},I::AbstractArray{T,N}) at 
abstractarray.jl:380
presumably after [:] has been converted to [1:8]



julia> getindex(A::FArray, ::Colon) = A[start(A) : endof(A)]
getindex (generic function with 177 methods)

julia> getindex2(A::FArray, ::Colon) = A[start(A) : endof(A)]
getindex2 (generic function with 1 method)

julia> A = FZeros(Int, -2:8)
11-element FArray{Int64,1} (-2:8,)

julia> A[:]
8-element Array{Int64,1}:
 0
 0
 0
 0
 0
 0
 0
 0

julia> @which A[:]
getindex(A::AbstractArray{T,N},I::AbstractArray{T,N}) at 
abstractarray.jl:380

julia> getindex2(A, :)
11-element Array{Int64,1}:
 0
 0
 0
 0
 0
 0
 0
 0
 0
 0
 0

julia>




Re: [julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread Spencer Russell
Can you post the code (or link to the repo) where you define your FArray type?

-s

> On Sep 21, 2015, at 12:38 AM, 'Greg Plowman' via julia-users 
>  wrote:
> 
> To further clarify, I thought I could specialise getindex / setindex! on 
> a Colon-type argument.
> 
> see below, getindex2 is being called, but getindex is not being called.
> A[:] calls getindex(A::AbstractArray{T,N},I::AbstractArray{T,N}) at 
> abstractarray.jl:380
> presumably after [:] has been converted to [1:8]
> 
> 
> 
> julia> getindex(A::FArray, ::Colon) = A[start(A) : endof(A)]
> getindex (generic function with 177 methods)
> 
> julia> getindex2(A::FArray, ::Colon) = A[start(A) : endof(A)]
> getindex2 (generic function with 1 method)
> 
> julia> A = FZeros(Int, -2:8)
> 11-element FArray{Int64,1} (-2:8,)
> 
> julia> A[:]
> 8-element Array{Int64,1}:
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
> 
> julia> @which A[:]
> getindex(A::AbstractArray{T,N},I::AbstractArray{T,N}) at abstractarray.jl:380
> 
> julia> getindex2(A, :)
> 11-element Array{Int64,1}:
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
>  0
> 
> julia>
> 
> 



[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Serge Santos
This is working! I rolled back and pinned all packages with different 
version numbers and Juno runs again. Thank you so much, Greg. Very much 
appreciated.

On Monday, 21 September 2015 02:09:56 UTC+1, Greg Plowman wrote:
>
> Serge,
>
> Below is output of Pkg.status():
>
> I had previously tried removing many packages, but nothing is pinned (so 
> not sure if rolling back will result in same config)
>
> The only error message I receive using Juno is:
> symbol could not be found jl_generating_output (-1): The specified 
> procedure could not be found.
>
> but after this everything seems to work as normal.
>
> Hope this helps.
> -- Greg
>
>
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "help()" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.11 (2015-07-27 06:18 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/   |  x86_64-w64-mingw32
> julia> Pkg.status()
> 9 required packages:
>  - Dates 0.3.2
>  - Distributions 0.8.6
>  - HDF5 0.5.5
>  - ImageView 0.1.16
>  - Images 0.4.47
>  - JLD 0.5.4
>  - Jewel 1.0.6
>  - Optim 0.4.2
>  - ZMQ 0.2.0
> 39 additional packages:
>  - ArrayViews 0.6.3
>  - BinDeps 0.3.15
>  - Blosc 0.1.4
>  - Cairo 0.2.30
>  - Calculus 0.1.10
>  - ColorTypes 0.1.4
>  - ColorVectorSpace 0.0.3
>  - Colors 0.5.3
>  - Compat 0.7.1
>  - Compose 0.3.15
>  - DataStructures 0.3.12
>  - Docile 0.5.18
>  - DualNumbers 0.1.3
>  - FactCheck 0.4.0
>  - FixedPointNumbers 0.0.10
>  - Graphics 0.1.0
>  - HttpCommon 0.1.2
>  - IniFile 0.2.4
>  - Iterators 0.1.8
>  - JSON 0.4.5
>  - JuliaParser 0.6.2
>  - LNR 0.0.1
>  - Lazy 0.10.0
>  - LibExpat 0.0.8
>  - MacroTools 0.2.0
>  - NaNMath 0.1.0
>  - PDMats 0.3.5
>  - Reexport 0.0.3
>  - Requires 0.2.0
>  - SHA 0.1.1
>  - SIUnits 0.0.5
>  - StatsBase 0.7.2
>  - StatsFuns 0.1.3
>  - TexExtensions 0.0.2
>  - Tk 0.3.6
>  - URIParser 0.0.7
>  - WinRPM 0.1.12
>  - Winston 0.11.12
>  - Zlib 0.1.9
> julia>
>
>
>
> On Monday, September 21, 2015 at 9:44:24 AM UTC+10, Serge Santos wrote:
>
> Hi All,
>
> I tried to roll back to JuliaParser v0.6.2 and it didn't work. 
>
> If someone still manages to successfully run Juno with Julia 0.3.11, can 
> you please send the list of packages with version numbers that does not 
> create any issues with Juno (i.e,, output from Pkg.status()). I was not 
> able to figure out what combination of versions work. 
>
> Many thanks in advance
> Serge
>
> On Monday, 21 September 2015 00:16:27 UTC+1, Greg Plowman wrote:
>
> Hi All,
>
> On 2 different PCs where Juno works (almost without error) Pkg.status() 
> reports JuliaParser v0.6.2
> On PC that has Juno errors, Pkg.status() reports JuliaParser v0.6.3
> Rolling back to JuliaParser v0.1.2 creates different errors.
> So it seems we need to revert to JuliaParser v0.6.2
>
> I'm not at a PC where I can see if we can pin v0.6.2, in light of the 
> following:
>
> Before this JuliaParser was at version v0.6.3, are you sure we should try 
> reverting to v0.1.2?
>
> See the tagged versions 
>
> https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the 
> next latest tagged version. You could probably checkout a specific commit 
> prior to the commit that’s causing the breakage instead though.
>
> Also, I don't want to play around with Pkg.ANYTHING on a working 
> configuration at the moment :)
>
> -- Greg
>
>
> On Monday, September 21, 2015 at 4:50:37 AM UTC+10, Tony Kelman wrote:
>
> What's temporarily broken here is some of the packages that 
> Light-Table-based Juno relies on to work. In the meantime you can still use 
> command-line REPL Julia, and while it's not the most friendly interface 
> your code will still run. Your estimation of the Julia ecosystem's 
> robustness is pretty accurate though, if you really want to ensure things 
> stay working the 

[julia-users] Re: Couldnt connect to Julia @doc not defined

2015-09-20 Thread andy hayden
Sorry about that. I didn't think it would break anything. Thanks for 
reverting.

Best,
Andy

On Sunday, 20 September 2015 03:14:28 UTC-7, Michael Hatherly wrote:
>
> Tracked down to this, https://github.com/JuliaLang/Compat.jl/pull/126, 
> Compat.jl change.
>
> — Mike
>
> On Sunday, 20 September 2015 11:43:36 UTC+2, nisha.j…@west.cmu.edu wrote:
>
> Hello,
>>
>> I am not able to run from LightTable but it runs fine from the terminal
>> I already tried Pkg.update() and also updated all outdated plugins.
>>  
>> Any ideas why??
>>
>> ERROR: @doc not defined
>>  in include at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in include_from_node1 at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in reload_path at loading.jl:152
>>  in _require at loading.jl:67
>>  in require at loading.jl:54
>>  in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
>>  in include at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in include_from_node1 at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in reload_path at loading.jl:152
>>  in _require at loading.jl:67
>>  in require at loading.jl:54
>>  in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
>>  in include at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in include_from_node1 at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in reload_path at loading.jl:152
>>  in _require at loading.jl:67
>>  in require at loading.jl:54
>>  in require at /Users/Nisha/.julia/v0.3/Requires/src/require.jl:12
>>  in include at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in include_from_node1 at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in include at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in include_from_node1 at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in reload_path at loading.jl:152
>>  in _require at loading.jl:67
>>  in require at loading.jl:51
>>  in include at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in include_from_node1 at loading.jl:128
>>  in process_options at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  in _start at 
>> /Applications/Julia-0.3.11.app/Contents/Resources/julia/lib/julia/sys.dylib
>> while loading /Users/Nisha/.julia/v0.3/ColorTypes/src/ColorTypes.jl, in 
>> expression starting on line 86
>> while loading /Users/Nisha/.julia/v0.3/Colors/src/Colors.jl, in 
>> expression starting on line 5
>> while loading /Users/Nisha/.julia/v0.3/Compose/src/Compose.jl, in 
>> expression starting on line 5
>> while loading /Users/Nisha/.julia/v0.3/Jewel/src/profile/profile.jl, in 
>> expression starting on line 3
>> while loading /Users/Nisha/.julia/v0.3/Jewel/src/Jewel.jl, in expression 
>> starting on line 15
>> while loading /Users/Nisha/Library/Application 
>> Support/LightTable/plugins/Julia/jl/init.jl, in expression starting on line 
>> 27
>>
> ​
>


Re: [julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread Spencer Russell
Hi Greg,

This doesn’t answer your question directly, but I recommend you check out the 
Interfaces chapter of the manual for some good info on creating your own array 
type. To get indexing working the most important parts are:

1. subtype AbstractArray
2. implement the Base.linearindexing(Type) method
3. implement either getindex(A, i::Int) or getindex(A, i1::Int, ..., iN::Int) 
(and the setindex! versions), depending on step 2.

Then the various indexing behaviors (ranges, etc.) should work for your type, 
as under-the-hood they boil down to the more basic indexing method that you 
define in step 3. One thing to note that’s not spelled out super clearly in the 
manual yet is that if you want the results of range indexing to be wrapped in 
your type, you should also implement `similar`. There’s some more details on 
that in this pr .
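
For illustration, the three steps above might look like the following minimal sketch (Julia 0.4 syntax; the type is hypothetical and read-only, so only `getindex` is shown):

```julia
# Step 1: subtype AbstractArray (element type Int, 1 dimension).
immutable SquaresVector <: AbstractArray{Int,1}
    count::Int
end

Base.size(A::SquaresVector) = (A.count,)

# Step 2: declare fast linear indexing.
Base.linearindexing(::Type{SquaresVector}) = Base.LinearFast()

# Step 3: scalar getindex; range indexing like A[2:4] then falls
# through to the generic AbstractArray fallbacks.
Base.getindex(A::SquaresVector, i::Int) = i * i
```

With just these methods, `SquaresVector(5)[2:4]` should return `[4, 9, 16]` via the generic machinery.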

OT: something about the way your email client is configured is causing you to 
show up as julia-users@googlegroups.com  
where for other users I see their names (at least in my email client), so it’s 
a bit hard to see who you are in lists of thread participant names.

-s




> On Sep 20, 2015, at 9:34 PM, 'Greg Plowman' via julia-users 
>  wrote:
> 
> Hi,
> 
> I'm trying to define a custom Array type that can be indexed using arbitrary 
> ranges.
> 
> e.g. A = MyArray(Int, 3:8) would define a 6-element vector with indexes 
> ranging from 3 to 8, rather than the default 1 to 6.
> 
> I've made some progress, but am now stuck on how to handle colon indexing.
> 
> A[4:6] works by defining appropriate getindex and setindex!
> 
> e.g.  setindex!{T,S<:Real}(A::MyArray{T,1}, value, I::AbstractVector{S}) = ...
> 
> but A[:] = 0 seems to get translated to A[1:6] before dispatch on setindex!, 
> so I can't hijack the call.
> 
> From subarray.jl, the code below suggests I can specialise on the Colon type, 
> but this doesn't seem to work for me. Colon appears to be converted to 
> UnitRange before calling setindex!
> 
> sub(A::AbstractArray, I::Union(RangeIndex, Colon)...) = sub(A, 
> ntuple(length(I), i-> isa(I[i], Colon) ? (1:size(A,i)) : I[i])...)
> 
> 
> Is there a way around this?
> Should I be able to specialise on the colon argument?
> 
> -- Greg



[julia-users] Check if type "contains" Any

2015-09-20 Thread Tommy Hofmann
I would like to write a function "contains_any(T)" which checks whether a 
type T "contains" Any. For example

contains_any(Integer) = false
contains_any(Any) = true
contains_any(Tuple{Integer, Any}) = true
contains_any(Tuple{Tuple{Integer, Any}, Float64}) = true

etc. How could one do this in julia?

Thanks,
Tommy
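
(Not from the thread, just one possible sketch of my own: recurse over the type's parameters. It deliberately ignores Union types and TypeVars.)

```julia
# A type "contains" Any if it is Any itself or if any of its
# type parameters (recursively) does.
function contains_any(T)
    T === Any && return true
    isa(T, DataType) || return false
    for P in T.parameters            # e.g. Tuple{Integer, Any}.parameters
        isa(P, Type) && contains_any(P) && return true
    end
    return false
end

contains_any(Integer)                              # false
contains_any(Tuple{Tuple{Integer, Any}, Float64})  # true
```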


Re: [julia-users] linspace and HDF5

2015-09-20 Thread Tom Breloff
Yes you should probably use `collect`.

With regards to plotting... can you post the pyplot code that generates the
graph that you want?  We may be able to either show you how to do it in
julia, or it will help in future development by pinpointing a deficiency.
Thanks.
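
(For reference, a minimal sketch of the `collect` route, for Julia 0.3/0.4 where `linspace` returns a lazy LinSpace:)

```julia
# Materialize the lazy LinSpace into a plain Array before writing to HDF5.
xs = collect(linspace(0.0, 1.0, 100))   # 100-element Array{Float64,1}
```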

On Sun, Sep 20, 2015 at 11:11 PM, Jan Strube  wrote:

> I'm trying to write some data and axis definitions to HDF5 for later
> plotting in pyplot. (Because I haven't figured out how to do lognorm on
> pcolormesh in PyPlot.jl)
> Writing the data - a 2D array - is no problem.
> Writing the axes - linspace(min, max, 100) - doesn't work, because I just
> found out that linspace creates a LinSpace object, not an array, and HDF5
> doesn't know how to write that.
> My question is: What is an idiomatic way to turn LinSpace into an Array?
> Is collect the recommended way to do this?
>
>


Re: [julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi Spencer,

Thanks for your reply.

Actually I'm trying to follow the Interfaces chapter.

I'm using Julia v0.3, so that might be a problem.

Also I think my problem is mainly for vectors (1-dimensional arrays), 
because linear and subscript indexing share the same syntax.

For multi-dimensional arrays, I can map subscript indexes to the default 
1:length linear indexes.

A = MyArray(Int, -2:8, -2:6) # defines 11x9 Matrix
A[:] gets translated to A[1:99], which can be handled by linear indexing

B = MyArray(Int, -2:8) # defines 11-element Vector
B[:] gets translated to B[1:11]; I need it to be translated to B[-2:8]

So I'm trying to find out where this conversion happens, so I can redefine 
it.
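
(One untested idea from my side: if the colon survives lowering on your Julia version, you could specialize on `Colon` directly. This assumes `MyArray` keeps its elements in a hypothetical `data` field; on 0.3 the call may already be rewritten to a range before dispatch, which is exactly the problem described above.)

```julia
# Hypothetical: intercept A[:] before the default 1:length(A) translation.
Base.setindex!{T}(A::MyArray{T,1}, value, ::Colon) = (fill!(A.data, value); A)
Base.getindex{T}(A::MyArray{T,1}, ::Colon) = copy(A.data)
```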

-- Greg

PS Not really sure what to do about email name. Name on julia-users shows:
me 
 (Greg Plowman change )   
 


On Monday, September 21, 2015 at 1:15:42 PM UTC+10, Spencer Russell wrote:

> Hi Greg,
>
> This doesn’t answer your question directly, but I recommend you check out 
> the Interfaces 
>  chapter 
> of the manual for some good info on creating your own array type. To get 
> indexing working the most important parts are:
>
> 1. subtype AbstractArray
> 2. implement the Base.linearindexing(Type) method
> 3. implement either getindex(A, i::Int) or 
> getindex(A, i1::Int, ..., iN::Int), (and the setindex! versions)  depending 
> on step 2.
>
> Then the various indexing behaviors (ranges, etc.) should work for your 
> type, as under-the-hood they boil down to the more basic indexing method 
> that you define in step 3. One thing to note that’s not spelled out super 
> clearly in the manual yet is that if you want the results of range indexing 
> to be wrapped in your type, you should also implement `similar`. There’s 
> some more details on that in this pr 
> .
>
> OT: something about the way your email client is configured is causing you 
> to show up as julia...@googlegroups.com  where for other 
> users I see their names (at least in my email client), so it’s a bit hard 
> to see who you are in lists of thread participant names.
>
> -s
>
>
>
>
> On Sep 20, 2015, at 9:34 PM, 'Greg Plowman' via julia-users <
> julia...@googlegroups.com > wrote:
>
> Hi,
>
> I'm trying to define a custom Array type that can be indexed using 
> arbitrary ranges.
>
> e.g. A = MyArray(Int, 3:8) would define a 6-element vector with indexes 
> ranging from 3 to 8, rather than the default 1 to 6.
>
> I've made some progress, but am now stuck on how to handle colon indexing.
>
> A[4:6] works by defining appropriate getindex and setindex!
>
> e.g.  setindex!{T,S<:Real}(A::MyArray{T,1}, value, I::AbstractVector{S}) 
> = ...
>
> but A[:] = 0 seems to get translated to A[1:6] before dispatch on 
> setindex!, so I can't hijack the call.
>
> From subarray.jl, the code below suggests I can specialise on the Colon 
> type, but this doesn't seem to work for me. Colon appears to be converted 
> to UnitRange *before* calling setindex!
>
> sub(A::AbstractArray, I::Union(RangeIndex, Colon)...) = sub(A, ntuple(
> length(I), i-> isa(I[i], Colon) ? (1:size(A,i)) : I[i])...)
>
>
> Is there a way around this?
> Should I be able to specialise on the colon argument?
>
> -- Greg
>
>
>

[julia-users] Array of DArrays

2015-09-20 Thread naelson
I want to create an Array of DArrays and I'm using


a = Array(DArray, 5)

but when I do a[1] = dfill(1,1)

I get the error:
ERROR: `convert` has no method matching convert(::Type{DArray{T,N,A}}, 
::Int64)


What am I doing wrong? 

How can I make it to work?


[julia-users] Re: Array of DArrays

2015-09-20 Thread naelson
Update: I'm using RemoteRefs instead of DArrays inside the Array and it 
works fine.

On Sunday, September 20, 2015 at 5:56:21 PM UTC-3, nae...@ic.ufal.br wrote:
>
> I want create an Array of DArrays and I'm using
>
>
> a = Array(DArray, 5)
>
> but when I do a[1] = dfill(1,1)
>
> I get the error:
> ERROR: `convert` has no method matching convert(::Type{DArray{T,N,A}}, 
> ::Int64)
>
>
> Where am I doing wrong? 
>
> How can I make it to work?
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
This is pure speculation, but the reason you don't get the same improvement
could be that 0.4 is somehow more intelligent about the use of `reshape()`.
Maybe one of the reasons the program has been running faster on your
computer all along is that 0.4 was handling the memory better to begin
with. I'm curious to see how the new code works for Adam.

slice() does exist in 0.3. I have never seen it before; I have no idea if
it is slow or fast. I will try your changes as soon as I get a chance.

Cheers,
Daniel.

On 20 September 2015 at 23:13, Kristoffer Carlsson 
wrote:

> For me, your latest changes made the time go from 0.13 -> 0.11. It is
> strange we have so different performances, but then again 0.3 and 0.4 are
> different beasts.
>
> Adding some calls to slice and another loop gained some perf for me. Can
> you try:
>
> https://gist.github.com/KristofferC/8a8ff33cb186183eea8d
>
> On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:
>>
>> Just another note:
>>
>> I suspect that the `reshape()` might be the guilty party. I am just
>> guessing here, but I suspect that the reshape() forces a memory copy, while
>> a regular slice just creates kind of symlink to the original data.
>> Furthermore, I suspect that the memory copy would mean that when you try to
>> read from the newly created variable, you have to fetch it from RAM,
>> despite the fact that the CPU cache already has a perfectly good copy of
>> the same data.
>>
>> Cheers,
>> Daniel.
>>
>>
>> On 20 September 2015 at 21:25, Daniel Carrera  wrote:
>>
>>> Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using
>>> Profile.print() I found that the hottest parts of the code appeared to be
>>> the if-conditions, such as:
>>>
>>> if o < b_hist[j,3]
>>>
>>> It occurred to me that this could be due to cache misses, so I rewrote
>>> the code to store the data more compactly:
>>>
>>> -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
>>> +  b_hist_1 = sim.b[d, 1:t, i, 1]
>>> +  b_hist_3 = sim.b[d, 1:t, i, 3]
>>> ...
>>> -if o < b_hist[j,3]
>>> +if o < b_hist_3[j]
>>>
>>>
>>> So, instead of an 3xN array, I store two 1xN arrays with the data I
>>> actually want. I suspect that the biggest improvement is not that there is
>>> 1/3 less data, but that the data just gets managed differently. The upshot
>>> is that now the program runs 208 times faster for me than it did initially.
>>> For me time execution time went from 45s to 0.2s.
>>>
>>> As always, the code is updated on Github:
>>>
>>> https://github.com/dcarrera/sim
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>
>>>
>>> On 20 September 2015 at 20:51, Seth  wrote:
>>>
 As an interim step, you can also get text profiling information using
 Profile.print() if the graphics aren't working.

 On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera
 wrote:
>
> Hmm... ProfileView gives me an error:
>
> ERROR: panzoom not defined
>  in view at
> /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>  in include at ./boot.jl:245
>  in include_from_node1 at ./loading.jl:128
> while loading /home/daniel/Projects/optimization/run_sim.jl, in
> expression starting on line 55
>
> Do I need to update something?
>
> Cheers,
> Daniel.
>
> On 20 September 2015 at 20:28, Kristoffer Carlsson  > wrote:
>
>> https://github.com/timholy/ProfileView.jl is invaluable for
>> performance tweaking.
>>
>> Are you on 0.4?
>>
>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan
>> Bouchet-Valat wrote:
>>>
>>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit
>>> :
>>> >
>>> >
>>> > On 20 September 2015 at 19:43, Kristoffer Carlsson <
>>> > kcarl...@gmail.com> wrote:
>>> > > Did you run the code twice to not time the JIT compiler?
>>> > >
>>> > > For me, my version runs in 0.24 and Daniels in 0.34.
>>> > >
>>> > > Anyway, adding this to Daniels version:
>>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes
>>> it
>>> > > run in 0.13 seconds for me.
>>> > >
>>> > >
>>> >
>>> > Interesting. For me that change only makes a 10-20% improvement.
>>> On
>>> > my laptop the program takes about 1.5s which is similar to Adam's.
>>> So
>>> > I guess we are running on similar hardware and you are probably
>>> using
>>> > a faster desktop. In any case, I added the change and updated the
>>> > repository:
>>> >
>>> > https://github.com/dcarrera/sim
>>> >
>>> > Is there a good way to profile Julia code? So I have been
>>> profiling
>>> > by inserting tic() and toc() lines everywhere. On my computer
>>> > @profile seems to do the same 

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
I just tried it. You are right. Using slice() makes the code 9x slower for
me.

On 20 September 2015 at 23:14, Kristoffer Carlsson 
wrote:

> Oh, if you are on 0.3 I am not sure if slice exist. If it does, it is
> really slow.
>
> On Sunday, September 20, 2015 at 11:13:21 PM UTC+2, Kristoffer Carlsson
> wrote:
>>
>> For me, your latest changes made the time go from 0.13 -> 0.11. It is
>> strange we have so different performances, but then again 0.3 and 0.4 are
>> different beasts.
>>
>> Adding some calls to slice and another loop gained some perf for me. Can
>> you try:
>>
>> https://gist.github.com/KristofferC/8a8ff33cb186183eea8d
>>
>> On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:
>>>
>>> Just another note:
>>>
>>> I suspect that the `reshape()` might be the guilty party. I am just
>>> guessing here, but I suspect that the reshape() forces a memory copy, while
>>> a regular slice just creates kind of symlink to the original data.
>>> Furthermore, I suspect that the memory copy would mean that when you try to
>>> read from the newly created variable, you have to fetch it from RAM,
>>> despite the fact that the CPU cache already has a perfectly good copy of
>>> the same data.
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>
>>> On 20 September 2015 at 21:25, Daniel Carrera  wrote:
>>>
 Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using
 Profile.print() I found that the hottest parts of the code appeared to be
 the if-conditions, such as:

 if o < b_hist[j,3]

 It occurred to me that this could be due to cache misses, so I rewrote
 the code to store the data more compactly:

 -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
 +  b_hist_1 = sim.b[d, 1:t, i, 1]
 +  b_hist_3 = sim.b[d, 1:t, i, 3]
 ...
 -if o < b_hist[j,3]
 +if o < b_hist_3[j]


 So, instead of an 3xN array, I store two 1xN arrays with the data I
 actually want. I suspect that the biggest improvement is not that there is
 1/3 less data, but that the data just gets managed differently. The upshot
 is that now the program runs 208 times faster for me than it did initially.
 For me time execution time went from 45s to 0.2s.

 As always, the code is updated on Github:

 https://github.com/dcarrera/sim

 Cheers,
 Daniel.



 On 20 September 2015 at 20:51, Seth  wrote:

> As an interim step, you can also get text profiling information using
> Profile.print() if the graphics aren't working.
>
> On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera
> wrote:
>>
>> Hmm... ProfileView gives me an error:
>>
>> ERROR: panzoom not defined
>>  in view at
>> /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>>  in include at ./boot.jl:245
>>  in include_from_node1 at ./loading.jl:128
>> while loading /home/daniel/Projects/optimization/run_sim.jl, in
>> expression starting on line 55
>>
>> Do I need to update something?
>>
>> Cheers,
>> Daniel.
>>
>> On 20 September 2015 at 20:28, Kristoffer Carlsson <
>> kcarl...@gmail.com> wrote:
>>
>>> https://github.com/timholy/ProfileView.jl is invaluable for
>>> performance tweaking.
>>>
>>> Are you on 0.4?
>>>
>>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan
>>> Bouchet-Valat wrote:

 Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit
 :
 >
 >
 > On 20 September 2015 at 19:43, Kristoffer Carlsson <
 > kcarl...@gmail.com> wrote:
 > > Did you run the code twice to not time the JIT compiler?
 > >
 > > For me, my version runs in 0.24 and Daniels in 0.34.
 > >
 > > Anyway, adding this to Daniels version:
 > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes
 it
 > > run in 0.13 seconds for me.
 > >
 > >
 >
 > Interesting. For me that change only makes a 10-20% improvement.
 On
 > my laptop the program takes about 1.5s which is similar to
 Adam's. So
 > I guess we are running on similar hardware and you are probably
 using
 > a faster desktop. In any case, I added the change and updated the
 > repository:
 >
 > https://github.com/dcarrera/sim
 >
 > Is there a good way to profile Julia code? So I have been
 profiling
 > by inserting tic() and toc() lines everywhere. On my computer
 > @profile seems to do the same thing as @time, so it's kind of
 useless
 > if I want to find the hot spots in a program.
 Sure :
 

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Kristoffer Carlsson
sim is defined inside the while loop, which means that it goes out of scope 
after the while loop ends; 
see: http://julia.readthedocs.org/en/latest/manual/variables-and-scoping/

If you want to run a number of sims you could, for example, create an empty 
vector at the start of main, push the sims into it, and then return the 
vector of sims at the end of the function.
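
A sketch of that suggestion (the loop body is a stand-in for running one simulation):

```julia
# Collect loop-local results into a vector that outlives the loop.
function run_all(n)
    sims = Any[]          # created in function scope, so it survives the loop
    i = 0
    while i < n
        i += 1
        sim = i * i       # stand-in for running one simulation
        push!(sims, sim)  # the loop-local `sim` is kept alive inside `sims`
    end
    return sims           # returning `sim` itself here would fail: out of scope
end

run_all(3)   # returns [1, 4, 9]
```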

On Sunday, September 20, 2015 at 11:50:18 PM UTC+2, Adam wrote:
>
> Thanks Daniel! That code ran in about 0.3 seconds on my machine as well. 
> More good progress! This puts Julia about ~5x faster than Matlab here. 
>
> I tried placing "return sim" at the end of your main() function, but I 
> still got an error saying "sim not defined." Why is that? Can I return 
> output from the simulation?
>
> On Sunday, September 20, 2015 at 4:14:08 PM UTC-5, Kristoffer Carlsson 
> wrote:
>>
>> Oh, if you are on 0.3 I am not sure if slice exist. If it does, it is 
>> really slow.
>>
>> On Sunday, September 20, 2015 at 11:13:21 PM UTC+2, Kristoffer Carlsson 
>> wrote:
>>>
>>> For me, your latest changes made the time go from 0.13 -> 0.11. It is 
>>> strange we have so different performances, but then again 0.3 and 0.4 are 
>>> different beasts.
>>>
>>> Adding some calls to slice and another loop gained some perf for me. Can 
>>> you try:
>>>
>>> https://gist.github.com/KristofferC/8a8ff33cb186183eea8d
>>>
>>> On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:

 Just another note:

 I suspect that the `reshape()` might be the guilty party. I am just 
 guessing here, but I suspect that the reshape() forces a memory copy, 
 while 
 a regular slice just creates kind of symlink to the original data. 
 Furthermore, I suspect that the memory copy would mean that when you try 
 to 
 read from the newly created variable, you have to fetch it from RAM, 
 despite the fact that the CPU cache already has a perfectly good copy of 
 the same data.

 Cheers,
 Daniel.


 On 20 September 2015 at 21:25, Daniel Carrera  
 wrote:

> Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using 
> Profile.print() I found that the hottest parts of the code appeared to be 
> the if-conditions, such as:
>
> if o < b_hist[j,3]
>
> It occurred to me that this could be due to cache misses, so I rewrote 
> the code to store the data more compactly:
>
> -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
> +  b_hist_1 = sim.b[d, 1:t, i, 1]
> +  b_hist_3 = sim.b[d, 1:t, i, 3]
> ...
> -if o < b_hist[j,3]
> +if o < b_hist_3[j]
>
>
> So, instead of an 3xN array, I store two 1xN arrays with the data I 
> actually want. I suspect that the biggest improvement is not that there 
> is 
> 1/3 less data, but that the data just gets managed differently. The 
> upshot 
> is that now the program runs 208 times faster for me than it did 
> initially. 
> For me time execution time went from 45s to 0.2s.
>
> As always, the code is updated on Github:
>
> https://github.com/dcarrera/sim
>
> Cheers,
> Daniel.
>
>
>
> On 20 September 2015 at 20:51, Seth  wrote:
>
>> As an interim step, you can also get text profiling information using 
>> Profile.print() if the graphics aren't working.
>>
>> On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera 
>> wrote:
>>>
>>> Hmm... ProfileView gives me an error:
>>>
>>> ERROR: panzoom not defined
>>>  in view at 
>>> /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>>>  in view at 
>>> /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>>>  in include at ./boot.jl:245
>>>  in include_from_node1 at ./loading.jl:128
>>> while loading /home/daniel/Projects/optimization/run_sim.jl, in 
>>> expression starting on line 55
>>>
>>> Do I need to update something?
>>>
>>> Cheers,
>>> Daniel.
>>>
>>> On 20 September 2015 at 20:28, Kristoffer Carlsson <
>>> kcarl...@gmail.com> wrote:
>>>
 https://github.com/timholy/ProfileView.jl is invaluable for 
 performance tweaking.

 Are you on 0.4?

 On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan 
 Bouchet-Valat wrote:
>
> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a 
> écrit : 
> > 
> > 
> > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
> > kcarl...@gmail.com> wrote: 
> > > Did you run the code twice to not time the JIT compiler? 
> > > 
> > > For me, my version runs in 0.24 and Daniels in 0.34. 
> > > 
> > > Anyway, adding this to Daniels version: 
> > > 

Re: [julia-users] Re: printing UInt displays hex instead of decimal

2015-09-20 Thread Jesse Johnson


On 09/20/2015 04:11 PM, Milan Bouchet-Valat wrote:
>
> The point is, in Julia using unsigned ints to store values that should
> always be positive is *not* recommended. 

This is starting to get off my main topic of consistently printing
numerical types and allowing the user to change the default print
formats, but you bring up an interesting point I'd like to discuss in a
separate thread.
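
(For context, the display behavior in question, shown via `repr` so it is version-agnostic; note that 0.3 spells the type `Uint64`:)

```julia
# Unsigned integers display in hexadecimal, signed integers in decimal.
repr(UInt64(10))   # "0x000000000000000a"
repr(Int64(10))    # "10"
```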


[julia-users] Is UInt for storing binary strings or unsigned integers?

2015-09-20 Thread Jesse Johnson
In a thread about printing UInt variables, Milan Bouchet-Valat said:
> The point is, in Julia using unsigned ints to store values that should
> always be positive is *not* recommended. 
If that is true, then shouldn't the type be called Byte? It seems the
type has been misnamed if it was never intended to store unsigned integers.

Further, calling the type UInt is misleading to devs from the C language
family, who frequently depend on compile-time type checking (e.g. int vs.
uint) to help ensure no unexpected signs show up. I am not suggesting
type-checking is a perfect defense against sign errors, and thorough
runtime testing is definitely necessary. In my larger projects combining
type checking and runtime tests is almost a practical necessity and can
seriously cut down on time spent bug hunting sign errors.

That said, I am guessing the suggested solution in Julia is to rely
solely on runtime sign checking? I can't see how I could make that
practical for my use cases, but it would be good to know if that is what
the Julia devs intend.

Thanks!

Jesse


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Kristoffer Carlsson
Oh, if you are on 0.3 I am not sure if slice exist. If it does, it is 
really slow.

On Sunday, September 20, 2015 at 11:13:21 PM UTC+2, Kristoffer Carlsson 
wrote:
>
> For me, your latest changes made the time go from 0.13 -> 0.11. It is 
> strange we have so different performances, but then again 0.3 and 0.4 are 
> different beasts.
>
> Adding some calls to slice and another loop gained some perf for me. Can 
> you try:
>
> https://gist.github.com/KristofferC/8a8ff33cb186183eea8d
>
> On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:
>>
>> Just another note:
>>
>> I suspect that the `reshape()` might be the guilty party. I am just 
>> guessing here, but I suspect that the reshape() forces a memory copy, while 
>> a regular slice just creates kind of symlink to the original data. 
>> Furthermore, I suspect that the memory copy would mean that when you try to 
>> read from the newly created variable, you have to fetch it from RAM, 
>> despite the fact that the CPU cache already has a perfectly good copy of 
>> the same data.
>>
>> Cheers,
>> Daniel.
>>
>>
>> On 20 September 2015 at 21:25, Daniel Carrera  wrote:
>>
>>> Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using 
>>> Profile.print() I found that the hottest parts of the code appeared to be 
>>> the if-conditions, such as:
>>>
>>> if o < b_hist[j,3]
>>>
>>> It occurred to me that this could be due to cache misses, so I rewrote 
>>> the code to store the data more compactly:
>>>
>>> -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
>>> +  b_hist_1 = sim.b[d, 1:t, i, 1]
>>> +  b_hist_3 = sim.b[d, 1:t, i, 3]
>>> ...
>>> -if o < b_hist[j,3]
>>> +if o < b_hist_3[j]
>>>
>>>
>>> So, instead of an 3xN array, I store two 1xN arrays with the data I 
>>> actually want. I suspect that the biggest improvement is not that there is 
>>> 1/3 less data, but that the data just gets managed differently. The upshot 
>>> is that now the program runs 208 times faster for me than it did initially. 
>>> For me time execution time went from 45s to 0.2s.
>>>
>>> As always, the code is updated on Github:
>>>
>>> https://github.com/dcarrera/sim
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>
>>>
>>> On 20 September 2015 at 20:51, Seth  wrote:
>>>
 As an interim step, you can also get text profiling information using 
 Profile.print() if the graphics aren't working.

 On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera 
 wrote:
>
> Hmm... ProfileView gives me an error:
>
> ERROR: panzoom not defined
>  in view at 
> /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>  in include at ./boot.jl:245
>  in include_from_node1 at ./loading.jl:128
> while loading /home/daniel/Projects/optimization/run_sim.jl, in 
> expression starting on line 55
>
> Do I need to update something?
>
> Cheers,
> Daniel.
>
> On 20 September 2015 at 20:28, Kristoffer Carlsson  > wrote:
>
>> https://github.com/timholy/ProfileView.jl is invaluable for 
>> performance tweaking.
>>
>> Are you on 0.4?
>>
>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan 
>> Bouchet-Valat wrote:
>>>
>>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit 
>>> : 
>>> > 
>>> > 
>>> > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
>>> > kcarl...@gmail.com> wrote: 
>>> > > Did you run the code twice to not time the JIT compiler? 
>>> > > 
>>> > > For me, my version runs in 0.24 and Daniels in 0.34. 
>>> > > 
>>> > > Anyway, adding this to Daniels version: 
>>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes 
>>> it 
>>> > > run in 0.13 seconds for me. 
>>> > > 
>>> > > 
>>> > 
>>> > Interesting. For me that change only makes a 10-20% improvement. 
>>> On 
>>> > my laptop the program takes about 1.5s which is similar to Adam's. 
>>> So 
>>> > I guess we are running on similar hardware and you are probably 
>>> using 
>>> > a faster desktop. In any case, I added the change and updated the 
>>> > repository: 
>>> > 
>>> > https://github.com/dcarrera/sim 
>>> > 
>>> > Is there a good way to profile Julia code? So I have been 
>>> profiling 
>>> > by inserting tic() and toc() lines everywhere. On my computer 
>>> > @profile seems to do the same thing as @time, so it's kind of 
>>> useless 
>>> > if I want to find the hot spots in a program. 
>>> Sure : 
>>> http://julia.readthedocs.org/en/latest/manual/profile/ 
>>>
>>>
>>> Regards 
>>>
>>
>
>>>
>>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Kristoffer Carlsson
For me, your latest changes made the time go from 0.13 -> 0.11. It is 
strange we have so different performances, but then again 0.3 and 0.4 are 
different beasts.

Adding some calls to slice and another loop gained some perf for me. Can 
you try:

https://gist.github.com/KristofferC/8a8ff33cb186183eea8d

On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:
>
> Just another note:
>
> I suspect that the `reshape()` might be the guilty party. I am just 
> guessing here, but I suspect that the reshape() forces a memory copy, while 
> a regular slice just creates kind of symlink to the original data. 
> Furthermore, I suspect that the memory copy would mean that when you try to 
> read from the newly created variable, you have to fetch it from RAM, 
> despite the fact that the CPU cache already has a perfectly good copy of 
> the same data.
>
> Cheers,
> Daniel.
>
>
> On 20 September 2015 at 21:25, Daniel Carrera  > wrote:
>
>> Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using 
>> Profile.print() I found that the hottest parts of the code appeared to be 
>> the if-conditions, such as:
>>
>> if o < b_hist[j,3]
>>
>> It occurred to me that this could be due to cache misses, so I rewrote 
>> the code to store the data more compactly:
>>
>> -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
>> +  b_hist_1 = sim.b[d, 1:t, i, 1]
>> +  b_hist_3 = sim.b[d, 1:t, i, 3]
>> ...
>> -if o < b_hist[j,3]
>> +if o < b_hist_3[j]
>>
>>
>> So, instead of an 3xN array, I store two 1xN arrays with the data I 
>> actually want. I suspect that the biggest improvement is not that there is 
>> 1/3 less data, but that the data just gets managed differently. The upshot 
>> is that now the program runs 208 times faster for me than it did initially. 
>> For me time execution time went from 45s to 0.2s.
>>
>> As always, the code is updated on Github:
>>
>> https://github.com/dcarrera/sim
>>
>> Cheers,
>> Daniel.
>>
>>
>>
>> On 20 September 2015 at 20:51, Seth > > wrote:
>>
>>> As an interim step, you can also get text profiling information using 
>>> Profile.print() if the graphics aren't working.
>>>
>>> On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera wrote:

 Hmm... ProfileView gives me an error:

 ERROR: panzoom not defined
  in view at 
 /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
  in include at ./boot.jl:245
  in include_from_node1 at ./loading.jl:128
 while loading /home/daniel/Projects/optimization/run_sim.jl, in 
 expression starting on line 55

 Do I need to update something?

 Cheers,
 Daniel.

 On 20 September 2015 at 20:28, Kristoffer Carlsson  
 wrote:

> https://github.com/timholy/ProfileView.jl is invaluable for 
> performance tweaking.
>
> Are you on 0.4?
>
> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat 
> wrote:
>>
>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit : 
>> > 
>> > 
>> > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
>> > kcarl...@gmail.com> wrote: 
>> > > Did you run the code twice to not time the JIT compiler? 
>> > > 
>> > > For me, my version runs in 0.24 and Daniels in 0.34. 
>> > > 
>> > > Anyway, adding this to Daniels version: 
>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes 
>> it 
>> > > run in 0.13 seconds for me. 
>> > > 
>> > > 
>> > 
>> > Interesting. For me that change only makes a 10-20% improvement. On 
>> > my laptop the program takes about 1.5s which is similar to Adam's. 
>> So 
>> > I guess we are running on similar hardware and you are probably 
>> using 
>> > a faster desktop. In any case, I added the change and updated the 
>> > repository: 
>> > 
>> > https://github.com/dcarrera/sim 
>> > 
>> > Is there a good way to profile Julia code? So I have been profiling 
>> > by inserting tic() and toc() lines everywhere. On my computer 
>> > @profile seems to do the same thing as @time, so it's kind of 
>> useless 
>> > if I want to find the hot spots in a program. 
>> Sure : 
>> http://julia.readthedocs.org/en/latest/manual/profile/ 
>>
>>
>> Regards 
>>
>

>>
>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Adam
Thanks Daniel! That code ran in about 0.3 seconds on my machine as well. 
More good progress! This puts Julia about ~5x faster than Matlab here. 

I tried placing "return sim" at the end of your main() function, but I 
still got an error saying "sim not defined." Why is that? Can I return 
output from the simulation?

On Sunday, September 20, 2015 at 4:14:08 PM UTC-5, Kristoffer Carlsson 
wrote:
>
> Oh, if you are on 0.3 I am not sure if slice exist. If it does, it is 
> really slow.
>
> On Sunday, September 20, 2015 at 11:13:21 PM UTC+2, Kristoffer Carlsson 
> wrote:
>>
>> For me, your latest changes made the time go from 0.13 -> 0.11. It is 
>> strange we have so different performances, but then again 0.3 and 0.4 are 
>> different beasts.
>>
>> Adding some calls to slice and another loop gained some perf for me. Can 
>> you try:
>>
>> https://gist.github.com/KristofferC/8a8ff33cb186183eea8d
>>
>> On Sunday, September 20, 2015 at 9:36:20 PM UTC+2, Daniel Carrera wrote:
>>>
>>> Just another note:
>>>
>>> I suspect that the `reshape()` might be the guilty party. I am just 
>>> guessing here, but I suspect that reshape() forces a memory copy, while 
>>> a regular slice just creates a kind of symlink to the original data. 
>>> Furthermore, I suspect that the memory copy would mean that when you try to 
>>> read from the newly created variable, you have to fetch it from RAM, 
>>> despite the fact that the CPU cache already has a perfectly good copy of 
>>> the same data.
>>>
>>> Cheers,
>>> Daniel.
>>>
>>>
>>> On 20 September 2015 at 21:25, Daniel Carrera  wrote:
>>>
 Whoo hoo! It looks like I got another ~6x or ~7x improvement. Using 
 Profile.print() I found that the hottest parts of the code appeared to be 
 the if-conditions, such as:

 if o < b_hist[j,3]

 It occurred to me that this could be due to cache misses, so I rewrote 
 the code to store the data more compactly:

 -  b_hist = reshape(sim.b[d, 1:t, i, :], t, 3)
 +  b_hist_1 = sim.b[d, 1:t, i, 1]
 +  b_hist_3 = sim.b[d, 1:t, i, 3]
 ...
 -if o < b_hist[j,3]
 +if o < b_hist_3[j]


 So, instead of a 3xN array, I store two 1xN arrays with the data I 
 actually want. I suspect that the biggest improvement is not that there is 
 1/3 less data, but that the data just gets managed differently. The upshot 
 is that the program now runs 208 times faster for me than it did 
 initially. For me, execution time went from 45s to 0.2s.

 As always, the code is updated on Github:

 https://github.com/dcarrera/sim

 Cheers,
 Daniel.
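
The rewrite described above is essentially pulling the one column the hot loop needs into a contiguous 1-D array. A minimal sketch of that idea, with made-up data and a hypothetical `count_above` helper (not the code from the thread):

```julia
# Sketch (made-up data): copy the needed column out of the 2-D history array
# once, so the hot loop reads a dense, contiguous 1-D array.
function count_above(col, o)
    c = 0
    for x in col
        if o < x   # same shape of comparison as the hot if-condition above
            c += 1
        end
    end
    return c
end

N = 10_000
b_hist = rand(N, 3)        # stand-in for the 2-D b_hist array
b_hist_3 = b_hist[:, 3]    # contiguous copy of just the third column
println(count_above(b_hist_3, 0.5))
```

The one-time copy costs O(N), but the loop then walks memory sequentially, which is the cache-friendliness the message is describing.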



 On 20 September 2015 at 20:51, Seth  wrote:

> As an interim step, you can also get text profiling information using 
> Profile.print() if the graphics aren't working.
>
> On Sunday, September 20, 2015 at 11:35:35 AM UTC-7, Daniel Carrera 
> wrote:
>>
>> Hmm... ProfileView gives me an error:
>>
>> ERROR: panzoom not defined
>>  in view at 
>> /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>>  in include at ./boot.jl:245
>>  in include_from_node1 at ./loading.jl:128
>> while loading /home/daniel/Projects/optimization/run_sim.jl, in 
>> expression starting on line 55
>>
>> Do I need to update something?
>>
>> Cheers,
>> Daniel.
>>
>> On 20 September 2015 at 20:28, Kristoffer Carlsson <
>> kcarl...@gmail.com> wrote:
>>
>>> https://github.com/timholy/ProfileView.jl is invaluable for 
>>> performance tweaking.
>>>
>>> Are you on 0.4?
>>>
>>> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan 
>>> Bouchet-Valat wrote:

 Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit 
 : 
 > 
 > 
 > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
 > kcarl...@gmail.com> wrote: 
 > > Did you run the code twice to not time the JIT compiler? 
 > > 
 > > For me, my version runs in 0.24 and Daniels in 0.34. 
 > > 
 > > Anyway, adding this to Daniels version: 
 > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes 
 it 
 > > run in 0.13 seconds for me. 
 > > 
 > > 
 > 
 > Interesting. For me that change only makes a 10-20% improvement. 
 On 
 > my laptop the program takes about 1.5s which is similar to 
 Adam's. So 
 > I guess we are running on similar hardware and you are probably 
 using 
 > a faster desktop. In any case, I added the change and updated the 
 > repository: 
 > 
 > https://github.com/dcarrera/sim 
 > 
 > Is there a good way to profile Julia 

[julia-users] Re: Why does empty index returns first element?

2015-09-20 Thread Sisyphuss
If you are familiar with C, an array is nothing more than a pointer to the 
first element of the array. 

So `x[0]` is the first element (C is 0-based), and `x[1]` is just the pointer 
moved forward by 1 unit. 
`x[]` has the same meaning as `x[0]`. 

Obviously, Julia inherits this convention.



On Saturday, September 19, 2015 at 7:42:57 PM UTC+2, Ismael VC wrote:
>
> I would have expected an error.
>
> Julia:
>
> julia> VERSION
> v"0.4.0-rc1"
>
> julia> x = [1:5;];
>
> julia> x[], x[1]
> (1,1)
>
> Python:
>
> >>> x = range(1, 6)
> >>> x[]
>   File "", line 1
> x[]
>   ^
> SyntaxError: invalid syntax
>
>
>

[julia-users] ERROR: curve_fit not defined

2015-09-20 Thread testertester
I am on Ubuntu, and my copy of julia was installed with apt-get install 
julia.

I was trying the curve fitting tutorial found here (
http://www.walkingrandomly.com/?p=5181) but I kept getting the error 
"curve_fit not defined". Yes, I have done "Pkg.add("Optim")" and 
"Pkg.add("LsqFit")", but that doesn't help. Apparently, nobody else on the 
internet has ever had this problem. I'm surprised.

Also, it doesn't seem like I can find the version number of Julia that I'm 
using. Following this 
(http://stackoverflow.com/questions/25326890/how-to-find-version-number-of-julia-is-there-a-ver-command)
 
just gives me the error "verbose not defined". Is every piece of 
documentation about Julia from earlier than 6/2015 invalid now? The 
responses I found here 
are pretty 
useless, as curve_fit doesn't even work by itself.
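
For reference, a minimal LsqFit sketch (hypothetical model and data, written for current Julia syntax): `curve_fit` only comes into scope after `using LsqFit`, which may be what is missing here, since `Pkg.add` installs but does not load a package.

```julia
using LsqFit   # curve_fit is defined by LsqFit, not by Optim

# hypothetical model with parameter vector p: p[1] * exp(-p[2] * x)
model(x, p) = p[1] .* exp.(-p[2] .* x)

xdata = collect(0.0:0.1:2.0)
ydata = model(xdata, [1.0, 2.0])   # synthetic, noise-free data
fit = curve_fit(model, xdata, ydata, [0.5, 0.5])
println(fit.param)                 # estimated parameters
```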


Re: [julia-users] META: What are the chances of moving this forum?

2015-09-20 Thread testertester
+1. Humans will be able to understand text no matter how it's served, so 
it's a bit pointless to obsess over gimmicks like GitHub's issue tracker. 
Switching discussion forums is always a huge pain.

On Thursday, September 10, 2015 at 1:37:01 PM UTC-4, Miguel Bazdresch wrote:
>
> Personally, I'd prefer to just have an old-fashioned, LISTERV-type mailing 
> list. Yes, I'm old.
>
> -- mb
>
> On Wed, Sep 9, 2015 at 7:34 AM, Nils Gudat  > wrote:
>
>> Was just thinking about this as first I had to try opening a thread here 
>> a couple of times before any posts were actually displayed, and then read 
>> through this 
>> 
>>  
>> thread on Mike's Juno Discuss forum, where a user had a question which was 
>> almost answered by the nice auto-suggestion feature that pops up when you 
>> ask a question.
>>
>> I feel that Google Groups has mostly downsides - the system is slow, with 
>> posts frequently not loading, double posts because of this, no proper code 
>> formatting or Markdown support etc.
>>
>> Is there a chance this could be moved to something like Discuss or is 
>> there too much inertia in having an "established" forum on Google Groups?
>>
>
>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Kristoffer Carlsson
https://github.com/timholy/ProfileView.jl is invaluable for performance 
tweaking.

Are you on 0.4?

On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat 
wrote:
>
> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit : 
> > 
> > 
> > On 20 September 2015 at 19:43, Kristoffer Carlsson < 
> > kcarl...@gmail.com > wrote: 
> > > Did you run the code twice to not time the JIT compiler? 
> > > 
> > > For me, my version runs in 0.24 and Daniels in 0.34. 
> > > 
> > > Anyway, adding this to Daniels version: 
> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it 
> > > run in 0.13 seconds for me. 
> > > 
> > > 
> > 
> > Interesting. For me that change only makes a 10-20% improvement. On 
> > my laptop the program takes about 1.5s which is similar to Adam's. So 
> > I guess we are running on similar hardware and you are probably using 
> > a faster desktop. In any case, I added the change and updated the 
> > repository: 
> > 
> > https://github.com/dcarrera/sim 
> > 
> > Is there a good way to profile Julia code? So I have been profiling 
> > by inserting tic() and toc() lines everywhere. On my computer 
> > @profile seems to do the same thing as @time, so it's kind of useless 
> > if I want to find the hot spots in a program. 
> Sure : 
> http://julia.readthedocs.org/en/latest/manual/profile/ 
>
>
> Regards 
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Hmm... ProfileView gives me an error:

ERROR: panzoom not defined
 in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
 in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
 in include at ./boot.jl:245
 in include_from_node1 at ./loading.jl:128
while loading /home/daniel/Projects/optimization/run_sim.jl, in expression
starting on line 55

Do I need to update something?

Cheers,
Daniel.

On 20 September 2015 at 20:28, Kristoffer Carlsson 
wrote:

> https://github.com/timholy/ProfileView.jl is invaluable for performance
> tweaking.
>
> Are you on 0.4?
>
> On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat
> wrote:
>>
>> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit :
>> >
>> >
>> > On 20 September 2015 at 19:43, Kristoffer Carlsson <
>> > kcarl...@gmail.com> wrote:
>> > > Did you run the code twice to not time the JIT compiler?
>> > >
>> > > For me, my version runs in 0.24 and Daniels in 0.34.
>> > >
>> > > Anyway, adding this to Daniels version:
>> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it
>> > > run in 0.13 seconds for me.
>> > >
>> > >
>> >
>> > Interesting. For me that change only makes a 10-20% improvement. On
>> > my laptop the program takes about 1.5s which is similar to Adam's. So
>> > I guess we are running on similar hardware and you are probably using
>> > a faster desktop. In any case, I added the change and updated the
>> > repository:
>> >
>> > https://github.com/dcarrera/sim
>> >
>> > Is there a good way to profile Julia code? So I have been profiling
>> > by inserting tic() and toc() lines everywhere. On my computer
>> > @profile seems to do the same thing as @time, so it's kind of useless
>> > if I want to find the hot spots in a program.
>> Sure :
>> http://julia.readthedocs.org/en/latest/manual/profile/
>>
>>
>> Regards
>>
>


[julia-users] Re: How to version control ipynb (or derive them from markdown files)?

2015-09-20 Thread Sisyphuss
I want to know the answer too.


On Sunday, September 20, 2015 at 9:37:36 PM UTC+2, Stephen Eglen wrote:
>
> Hi,
>
> How do people version control their ipynb files in a meaningful way?  Do 
> you strip out the output first before committing, as suggested here: 
> http://stackoverflow.com/questions/18734739/using-ipython-notebooks-under-version-control
> ?
>
> Or, alternatively, as I'm used to writing code in markdown, is there a 
> straightforward way to convert markdown with julia code chunks into a 
> ipython notebook?
> I found notedown (https://github.com/aaren/notedown), but couldn't see 
> that it works with Julia or R.
>
> Thanks, Stephen
>
>

Re: [julia-users] Re: printing UInt displays hex instead of decimal

2015-09-20 Thread Jesse Johnson
Thanks, I wasn't sure of the nomenclature. Why is print producing
different results for the 1D and 2D array?

On 09/20/2015 04:29 PM, Zheng Wendell wrote:
> `b` is a 2-dimensional array.
>
> On Sun, Sep 20, 2015 at 9:59 PM, Jesse Johnson
> > wrote:
>
> That is part of the inconsistency I was referring to. IMO a single
> default representation for a value should be used everywhere.
>
> Further, in 0.4rc1 there seems to be another, more serious
> inconsistency: hex is being printed for UInt column vectors
> and decimal for row vectors.
>
> a = UInt[]
> for n::UInt in 10:12
> push!(a, n)
> end
> b = UInt[10 11 12]
> c = UInt[10, 11, 12]
> d = UInt[n for n in 10:12]
>
> println(a)
> println(b)
> println(c)
> println(d)
>
> Output from CLI:
>
> UInt64[0x000a,0x000b,0x000c]
> UInt64[10 11 12]
> UInt64[0x000a,0x000b,0x000c]
> UInt64[0x000a,0x000b,0x000c]
>
> I am pretty sure this is a bug.
>
>
> On 09/17/2015 08:16 AM, Sisyphuss wrote:
>> This is not "printing" but "returned value"
>> Try `a[1]`, you get 0x0001
>> Try `print(a[1])`, you get 1
>>
>> So overload `print` if ever needed.
>>
>>
>> On Wednesday, September 16, 2015 at 11:34:33 PM UTC+2,
>> holocro...@gmail.com  wrote:
>>
>> In Julia 0.4rc1, when I create a UInt, either as an
>> individual value or array, and then print it hex values are
>> usually displayed instead of decimals. I say 'usually'
>> because the behavior changes a bit between REPL and
>>
>> For instance:
>>
>> julia> a = UInt[1 2 3 4]
>> 1x4 Array{UInt64,2}:
>>  0x0001  0x0002  …  0x0004
>>
>> This annoys me because 98% of the time I want the decimal
>> representation. Decimal is shown for Int, so why is hex the
>> default for UInt? Is it a bug?
>>
>
>
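
One possible workaround (not from the thread): convert to a signed type for display when the decimal representation is wanted, since `Int` arrays print in decimal.

```julia
a = UInt[10, 11, 12]
println(a)            # 1-D UInt array: elements display in hex
println(map(Int, a))  # a converted Int copy displays in decimal
```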



Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Adam
Thanks for the comments! Daniel and Kristoffer, I ran each of your versions on 
my machine. Daniel's ran in 2.0 seconds on my laptop (~26% of time in gc); 
Kristoffer's ran in about 7 seconds on my laptop (~21% of time in gc). I'm 
not sure why Kristoffer's took so much longer to run for me than it did for 
him-- perhaps his machine is significantly better. Or maybe it's because 
I'm running in Juno and not from the command line?

As it stands, I can get the code to perform nearly as well as Matlab with 
Daniel's code and my latest 
version: https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6 (note, when 
running I fixed the omission of "Prob_o" as an argument for "update_w", 
which is the right thing to do but saw no speed improvement). I think the 
primary difference is that Daniel defines types to store the parameters 
etc. (which more closely matches my original code), while my latest version 
passes all function arguments directly. On my laptop, these run in the 
neighborhood of 2-3 seconds (vs. 1.4 seconds for the Matlab code). 

Any thoughts on how I can beat the Matlab code? I would prefer to not (yet) 
get into possibilities like parallelization of the "i" for loop in 
"simulation.jl", since while I'm sure any parallelization would speed up 
the code, my impression is that Julia should be able to trump Matlab even 
before getting into anything like that. 

P.S.- In my latest version of the code (again, 
here: https://gist.github.com/anonymous/cee196ee43cb9bf1c8b6), in line 44 
of run_sim() I have:
#return s_array, w_array, b_array
when I uncomment that line and run the code, I receive an error saying 
s_array doesn't exist. Can someone tell me why? I checked, and the 
"simulation" function indeed creates s_array as output, so I'm not sure why 
"run_sim" won't return it. 


On Sunday, September 20, 2015 at 10:49:06 AM UTC-5, Daniel Carrera wrote:
>
>
> On 20 September 2015 at 17:39, STAR0SS  
> wrote:
>
>> The biggest problem right now is that Prob_o is global in calc_net. You 
>> need to pass it as an argument too. It's one of the drawback of having 
>> everything global by default, this kind of mistakes are sometimes hard to 
>> spot.
>>
>
> But... calc_net does not use Prob_o ... ?
>
>  
>
>>
>> Otherwise, the "# get things in right dimensions for calculation below" 
>> comment at line 55 is not necessary anymore.
>>
>> In these zeros(numO, 1) calls you don't need to put a 1; zeros(numO) gives 
>> you a vector of length numO, unlike in Matlab, where it gives you a matrix.
>>
>>
>
> I cleaned up the code and updated Github. No speed difference though.
>
> Cheers,
> Daniel.
>  
>
>

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Milan Bouchet-Valat
Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit :
> 
> 
> On 20 September 2015 at 19:43, Kristoffer Carlsson <
> kcarlsso...@gmail.com> wrote:
> > Did you run the code twice to not time the JIT compiler?
> > 
> > For me, my version runs in 0.24 and Daniels in 0.34.
> > 
> > Anyway, adding this to Daniels version: 
> > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it
> > run in 0.13 seconds for me.
> > 
> > 
> 
> Interesting. For me that change only makes a 10-20% improvement. On
> my laptop the program takes about 1.5s which is similar to Adam's. So
> I guess we are running on similar hardware and you are probably using
> a faster desktop. In any case, I added the change and updated the
> repository:
> 
> https://github.com/dcarrera/sim
> 
> Is there a good way to profile Julia code? So I have been profiling
> by inserting tic() and toc() lines everywhere. On my computer
> @profile seems to do the same thing as @time, so it's kind of useless
> if I want to find the hot spots in a program.
Sure :
http://julia.readthedocs.org/en/latest/manual/profile/


Regards
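
For what it's worth, a minimal sketch of the sampling-profiler workflow from that manual page (in current Julia, `Profile` is a standard library; in the 0.3/0.4 era of this thread it lived in Base, so no `using` was needed):

```julia
using Profile   # stdlib in current Julia; not needed on 0.3/0.4

function work()
    s = 0.0
    for i in 1:10^6
        s += sqrt(i)
    end
    return s
end

work()            # run once first so JIT compilation is not profiled
Profile.clear()   # discard any previously collected samples
@profile work()   # collect samples while work() runs
Profile.print()   # text report of where the samples landed
```

Unlike `@time`, `@profile` does not report anything by itself; the hot spots only appear once `Profile.print()` (or a viewer like ProfileView) is called.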


Re: [julia-users] Re: Julia 0.4 RC ppa?

2015-09-20 Thread Elliot Saba
Yep, Tony hit it on the head.  As the maintainer of the Ubuntu PPA, I
definitely understand the usefulness and ease of having Julia managed by
the system package manager, but unfortunately the build process is
relatively difficult to debug/fix; we have to jump through quite a few
hoops to get our source packages ready for building on the Canonical
servers, and problems often arise and must wait a few weeks before I can
fix them.

To give you an idea of the workflow we have set up right now, the first step
is that a job called "package_launchpad" is run on the buildbots every time a
commit passes the automated testing; it bundles that commit up into a form
ready to be submitted to launchpad.  The script that is run by that buildbot
job is right here.
The results of running that script are saved on the buildbot website linked
above; here's a link to the latest run, which is the first to succeed in a
while, due to some incorrect configuration after I rebuilt the VM images a few
weeks ago in preparation for 0.4 releases.  One of the pieces of preparation
that the script performs is to embed a debian directory from this repository,
which gives the rules and metadata necessary to build a Debian package for
Julia.

As far as providing a `julia0.4` package, that is an interesting idea that
may be the best way forward, but unfortunately I have many other projects
that are vying for my attention right now.  If anyone reading this is
interested in pushing forward on that particular effort, even if you don't
have much experience working on this kind of stuff, please do not hesitate
to contact me to get more information on how to start, or just start
submitting pull requests/forking things.

In the meantime, just like Tony said, I think the best bet is to use the
generic linux tarballs for now.  In all honesty, there's really only one
concrete benefit to the PPA binaries (other than the simple purity of
having things managed by dpkg rather than being downloaded and installed to
user-directories) and that is automatic updates.

Now that I think about it, there's a possibility that a "dummy" .deb could
be created that just downloads the latest `.tar.gz`, unpacks it into
`/usr`, and calls it a day.
-E

On Sun, Sep 20, 2015 at 8:55 AM, Tony Kelman  wrote:

> Versioning the julia package name in the ppa would be a very good idea.
> The only reason the PPA is often out of date is that it's entirely
> maintained by a single person who doesn't always have the time to fix
> things that break or update things that would usually be handled
> automatically. As I said it takes more maintenance to keep running than the
> tarball builds, and since the PPA is Ubuntu-specific we've been encouraging
> people to use the generic tarballs now since we have more control over
> dependencies, public visibility to any issues that arise, and the ability
> for multiple people to fix them. I recognize the utility in having your
> system package manager handle updates, but it's a fair bit more maintenance
> work. Downloading and installing a tarball to use the binaries of Julia
> should be pretty easy, and doesn't need root access either.
>
>
> On Sunday, September 20, 2015 at 8:37:57 AM UTC-7, Glen O wrote:
>>
>> Is there a reason why the juliareleases ppa couldn't provide a julia0.4
>> package, separately from the current julia package? I've seen similar
>> things done with packages elsewhere, including within the main ubuntu
>> repositories. Indeed, given the changes happening to the language, perhaps
>> it's a good idea to start keeping major versions of julia separate (that
>> is, make it julia0.3 and julia0.4, with julia being a dependency package
>> that will pull in the latest stable julia (ie/ it will point to julia0.3
>> until julia0.4 is properly released, then it will point to julia0.4).
>>
>> This also minimises issues for people who might have julia 0.3 currently
>> installed and are actively using it, and don't want to accidentally update
>> to 0.4 and have to alter all of their code to account for changes in the
>> language - they would just remove the dependency package, and be guaranteed
>> to remain with julia0.3 only.
>>
>> I do understand why it might be considered too much of a nuisance for the
>> relatively short RC period, when we can wait for the proper release, but
>> I'm probably not the only person who isn't up to using an in-development
>> version (nightlies), but is willing to use one that might just be slightly
>> buggy (release candidate), and who doesn't want to fiddle with installation
>> or compilation.
>>
>> On Monday, 21 September 2015 00:47:10 UTC+10, Tony Kelman wrote:
>>>
>>> 

Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Daniel Carrera
Oh, wait, I'm an idiot. I just figured out how to use @profile... :-P

On 20 September 2015 at 20:22, Daniel Carrera  wrote:

> Is there a good way to profile Julia code? So I have been profiling by
> inserting tic() and toc() lines everywhere. On my computer @profile seems
> to do the same thing as @time, so it's kind of useless if I want to find
> the hot spots in a program.
>
> Cheers,
> Daniel.
>
>


Re: [julia-users] Re: Julia code 5x to 30x slower than Matlab code

2015-09-20 Thread Tim Holy
Looks like you were getting a partial installation of code that needs julia 
0.4. Try Pkg.update() and it should downgrade you to an earlier version.

--Tim

On Sunday, September 20, 2015 08:35:28 PM Daniel Carrera wrote:
> Hmm... ProfileView gives me an error:
> 
> ERROR: panzoom not defined
>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileViewGtk.jl:32
>  in view at /home/daniel/.julia/v0.3/ProfileView/src/ProfileView.jl:51
>  in include at ./boot.jl:245
>  in include_from_node1 at ./loading.jl:128
> while loading /home/daniel/Projects/optimization/run_sim.jl, in expression
> starting on line 55
> 
> Do I need to update something?
> 
> Cheers,
> Daniel.
> 
> On 20 September 2015 at 20:28, Kristoffer Carlsson 
> 
> wrote:
> > https://github.com/timholy/ProfileView.jl is invaluable for performance
> > tweaking.
> > 
> > Are you on 0.4?
> > 
> > On Sunday, September 20, 2015 at 8:26:08 PM UTC+2, Milan Bouchet-Valat
> > 
> > wrote:
> >> Le dimanche 20 septembre 2015 à 20:22 +0200, Daniel Carrera a écrit :
> >> > On 20 September 2015 at 19:43, Kristoffer Carlsson <
> >> > 
> >> > kcarl...@gmail.com> wrote:
> >> > > Did you run the code twice to not time the JIT compiler?
> >> > > 
> >> > > For me, my version runs in 0.24 and Daniels in 0.34.
> >> > > 
> >> > > Anyway, adding this to Daniels version:
> >> > > https://gist.github.com/KristofferC/c19c0ccd867fe44700bd makes it
> >> > > run in 0.13 seconds for me.
> >> > 
> >> > Interesting. For me that change only makes a 10-20% improvement. On
> >> > my laptop the program takes about 1.5s which is similar to Adam's. So
> >> > I guess we are running on similar hardware and you are probably using
> >> > a faster desktop. In any case, I added the change and updated the
> >> > repository:
> >> > 
> >> > https://github.com/dcarrera/sim
> >> > 
> >> > Is there a good way to profile Julia code? So I have been profiling
> >> > by inserting tic() and toc() lines everywhere. On my computer
> >> > @profile seems to do the same thing as @time, so it's kind of useless
> >> > if I want to find the hot spots in a program.
> >> 
> >> Sure :
> >> http://julia.readthedocs.org/en/latest/manual/profile/
> >> 
> >> 
> >> Regards



[julia-users] Guideline for Function Documentation

2015-09-20 Thread Christof Stocker
So the website is (probably on purpose) not very specific in its 
recommendations for how to document one's functions (style-wise) using the 
new doc system. However, since I am writing an end-user-facing library 
with a lot of parameters, I am wondering what useful guidelines 
others here have come up with that seem to work nicely for documentation.


I work a lot with R, so naturally I have taken inspiration from how the R 
help docs are structured. However, now I wonder what would be a good way 
to list and describe all the named arguments of a function. Simply 
making bullet points doesn't seem like the most readable option.


Any opinions, references, or tips on that?
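
One possible convention (a sketch with a hypothetical function, not an official guideline): a signature line, a one-sentence summary, and an `# Arguments` section with one bullet per argument, stating its type and default:

```julia
"""
    fit_model(data; maxiter=100, tol=1e-6) -> Vector{Float64}

Fit a model to `data` and return the estimated coefficients.

# Arguments
- `data::Matrix{Float64}`: observations, one row per sample.
- `maxiter::Int=100`: maximum number of iterations.
- `tol::Float64=1e-6`: convergence tolerance on the objective.
"""
function fit_model(data; maxiter=100, tol=1e-6)
    return zeros(size(data, 2))   # placeholder body for the sketch
end
```

Bullets may not be pretty, but they render consistently in the REPL help and in generated HTML docs, which is an argument for sticking with them.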


Re: [julia-users] Guideline for Function Documentation

2015-09-20 Thread Milan Bouchet-Valat
Le dimanche 20 septembre 2015 à 21:46 +0200, Christof Stocker a écrit :
> So the website is (probably on purpose) not very specific on 
> recommendations for how to document ones functions (stylewise) using
> the 
> new doc system. However, since I am writing an enduser-facing library
> with a lot of parameters, I am wondering about what useful guidelines
> others here came up with that seem to work nicely for documentation.
> 
> I work a lot with R so naturally I have taken inspiration from how
> the R 
> help docs are structured. However, now I wonder what would be a good
> way 
> to list and describe all the named arguments of a function. Simply 
> making bullet points doesn't seem like the most readable option
> 
> any opinions, references, or tips on that?
I suspect the manual doesn't say anything because there's no convention
yet. See:
https://github.com/JuliaLang/julia/issues/8966


Regards


Re: [julia-users] Re: printing UInt displays hex instead of decimal

2015-09-20 Thread Zheng Wendell
`b` is a 2-dimensional array.

On Sun, Sep 20, 2015 at 9:59 PM, Jesse Johnson 
wrote:

> That is part of the inconsistency I was referring to. IMO a single default
> representation for a value should be used everywhere.
>
> Further, in 0.4rc1 there seems to be another, more serious inconsistency:
> hex is being printed for UInt column vectors and decimal for row vectors.
>
> a = UInt[]
> for n::UInt in 10:12
> push!(a, n)
> end
> b = UInt[10 11 12]
> c = UInt[10, 11, 12]
> d = UInt[n for n in 10:12]
>
> println(a)
> println(b)
> println(c)
> println(d)
>
> Output from CLI:
>
> UInt64[0x000a,0x000b,0x000c]
> UInt64[10 11 12]
> UInt64[0x000a,0x000b,0x000c]
> UInt64[0x000a,0x000b,0x000c]
>
> I am pretty sure this is a bug.
>
>
> On 09/17/2015 08:16 AM, Sisyphuss wrote:
>
> This is not "printing" but "returned value"
> Try `a[1]`, you get 0x0001
> Try `print(a[1])`, you get 1
>
> So overload `print` if ever needed.
>
>
> On Wednesday, September 16, 2015 at 11:34:33 PM UTC+2,
> holocro...@gmail.com wrote:
>>
>> In Julia 0.4rc1, when I create a UInt, either as an individual value or
>> array, and then print it hex values are usually displayed instead of
>> decimals. I say 'usually' because the behavior changes a bit between REPL
>> and
>>
>> For instance:
>>
>> julia> a = UInt[1 2 3 4]
>> 1x4 Array{UInt64,2}:
>>  0x0001  0x0002  …  0x0004
>>
>> This annoys me because 98% of the time I want the decimal representation.
>> Decimal is shown for Int, so why is hex the default for UInt? Is it a bug?
>>
>
>


[julia-users] Re: What does the `|>` operator do? (possibly a Gtk.jl question)

2015-09-20 Thread Sisyphuss
This makes me think of double linear form!


On Sunday, September 20, 2015 at 11:19:16 AM UTC+2, STAR0SS wrote:
>
> From the help:
>
> help?> |>
> search: |>
>
> ..  |>(x, f)
>
> Applies a function to the preceding argument. This allows for easy 
> function chaining.
>
> .. doctest::
>
> julia> [1:5;] |> x->x.^2 |> sum |> inv
> 0.01818181818181818
>
> The implementation is quite simple:
>
> |>(x, f) = f(x)
>
> https://github.com/JuliaLang/julia/blob/master/base/operators.jl#L198
>
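
Since `|>(x, f) = f(x)`, the pipe is literally just reversed function application; a quick check of the equivalence:

```julia
# x |> f is the same call as f(x); chains read left to right.
@assert ([1:5;] |> sum) == sum(1:5)            # both are 15
@assert (16 |> sqrt |> Int) == Int(sqrt(16))   # both are 4
println([1:5;] |> x -> x .^ 2 |> sum |> inv)   # the doctest example above
```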


[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread Serge Santos
Thank you all for your inputs. I tried your suggestions and, 
unfortunately, they do not work. I tried Atom but, after a good start and 
some success, it keeps crashing in the middle of a calculation (Windows 10).

To summarize what I tried with Juno and julia 0.3.11:
- Compat v.0.7.0 (pinned)
- JuliaParser V0.1.2  (pinned)
- Jewel v1.0.6.

I get a first error message, which seems to indicate that Julia cannot 
generate an output.

*symbol could not be found jl_generating_output (-1): The specified 
procedure could not be found.*

Followed by:

*WARNING: LightTable.jl: `skipws` has no method matching 
skipws(::TokenStream)*
* in scopes at C:\Users\Serge\.julia\v0.3\Jewel\src\parse\scope.jl:148*
* in codemodule at C:\Users\Serge\.julia\v0.3\Jewel\src\parse/parse.jl:141*
* in getmodule at C:\Users\Serge\.julia\v0.3\Jewel\src\eval.jl:42*
* in anonymous at 
C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable\eval.jl:51*
* in handlecmd at 
C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:65*
* in handlenext at 
C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:81*
* in server at 
C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:22*
* in server at C:\Users\Serge\.julia\v0.3\Jewel\src\Jewel.jl:18*
* in include at boot.jl:245*
* in include_from_node1 at loading.jl:128*
* in process_options at client.jl:285*
* in _start at client.jl:354*

Looking at dependencies with MetadataTools, I simply got: *nothing.* I 
assume that Jewel does not have any dependencies.

I have a lot of understanding for the effort that goes into making the 
Julia project a success, but I just lost two days of my life 
trying to make things work, and I have an important deadline ahead that I am 
likely to miss because I relied on a promising tool that, unfortunately, 
does not seem robust enough at this stage given the amount of development 
happening. It makes me wonder whether I should wait until Julia becomes more 
established and robust, and switch to other solutions for now.

On Sunday, 20 September 2015 16:47:05 UTC+1, Dongning Guo wrote:
>
> In case you're stuck, this may be a way out:
> I installed Atom editor and it seems Julia (v0.5??? nightly build)
> works with it after installing a few packages.  I'm learning to use
> the new environment ...
> See https://github.com/JunoLab/atom-julia-client/tree/master/manual
>
>
> On Sunday, September 20, 2015 at 6:59:02 AM UTC-5, Michael Hatherly wrote:
>>
>> I can’t see LightTable listed in Pkg.status() output in either PC
>>
>> The LightTable module is part of the Jewel package, it seems, 
>> https://github.com/one-more-minute/Jewel.jl/blob/fb854b0a64047ee642773c0aa824993714ee7f56/src/Jewel.jl#L22,
>> and so won’t show up in Pkg.status() output since it’s not a true 
>> package by itself. Apologies for the misleading directions there.
>>
>> What other packages would Juno depend on?
>>
>> You can manually walk through the REQUIRE files to see what Jewel depends 
>> on, or use MetadataTools to do it:
>>
>> julia> using MetadataTools
>> julia> pkgmeta = get_all_pkg();
>> julia> graph = make_dep_graph(pkgmeta);
>> julia> deps = get_pkg_dep_graph("Jewel", graph);
>> julia> map(println, keys(deps.p_to_i));
>>
>> You shouldn’t need to change versions for most, if any, of what’s listed 
>> though. (Don’t forget to call Pkg.free on each package you pin once 
>> newer versions of the packages are tagged.) Compat 0.7.1 should be far 
>> enough back I think.
>>
>> — Mike
>> ​
>> On Sunday, 20 September 2015 13:29:38 UTC+2, Greg Plowman wrote:
>>>
>>> OK I see that second latest tag is v0.1.2 (17 June 2014). Seems a 
>>> strange jump.
>>>
>>> But now I understand pinning, I can use a strategy of rolling back 
>>> Juno-related packages until Juno works again.
>>>
>>> What other packages would Juno depend on?
>>>
>>> To help me in this endeavour, I have access to another PC on which Juno 
>>> runs (almost) without error.
>>> Confusingly, Pkg.status() reports JuliaParser v0.6.2 on this second PC
>>> Jewel is v1.0.6 on both PCs.
>>> I can't see LightTable listed in Pkg.status() output in either PC
>>>
>>> I think Compat v0.7.2 is also causing ERROR: @doc not defined issue (
>>> https://groups.google.com/forum/#!topic/julia-users/rsM4hxdkAxg)
>>> so maybe reverting back to Compat v0.7.0 might also help.
>>>
>>> -- Greg
>>>
>>>
>>> On Sunday, September 20, 2015 at 7:08:47 PM UTC+10, Michael Hatherly 
>>> wrote:
>>>
 Before this JuliaParser was at version v0.6.3, are you sure we should 
 try reverting to v0.1.2?

 See the tagged versions 
 https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the 
 next latest tagged version. You could probably checkout a specific commit 
 prior to the commit that’s causing the breakage instead though.

 What version of Jewel.jl and LightTable.jl are you using?

 — Mike
 ​

 On Sunday, 20 September 2015 10:56:22 UTC+2, Greg Plowman wrote:
>
> Hi,
>
> I tried 
