[julia-users] Re: Passing data through Optim

2015-09-29 Thread Tomas Lycken
The performance penalty from global variables comes from the fact that 
non-const global variables are type unstable, which means that the compiler 
has to generate very defensive, and thus slow, code. The same thing is, for 
now, true also for anonymous functions and functions passed as parameters 
to other functions, so switching one paradigm for the other will 
unfortunately not help much for performance. I haven't looked closely at 
your code, but the 
[FastAnonymous.jl](https://github.com/timholy/FastAnonymous.jl) package can 
probably help with making the anonymous function version faster.
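For illustration, here is a minimal sketch of what the FastAnonymous.jl route could look like, based on its documented `@anon` macro (the captured `data` and the objective below are made-up stand-ins, not your model):

```julia
using FastAnonymous

data = [1.0, 2.0, 3.0]

# @anon builds a fast callable object; as I understand the package, the value
# of `data` is captured when the object is created.
objective = @anon x -> sum((data .- x[1]).^2)

objective([0.5])
```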

If the data you want to pass along doesn't change, using *const* globals is 
fine (at least from a performance perspective, you might have other reasons 
to avoid them too...), i.e. you can do the following and it will also be 
faster:

```
const data = 3

function model(x)
    # use data somehow
end

# use the model function in your optimization
```

If you need the data to change, you can still use a `const` global 
variable, but set it to some mutable (and type-stable!) object, for example

```
type MyData
    data::Int
end

const data = MyData(3)

function model(x)
    data.data += 1
    # ...
end

# use the model function in your optimization
# it can now modify the global data on each invocation
```
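As a rough sketch of how that could be wired into the subject loop from this thread (the wrapper type, field names and data below are illustrative assumptions, not the original code):

```julia
using Distributions, Optim

type SubjectData
    values::Vector{Float64}
end

const subdata = SubjectData(Float64[])

# The objective reads the current subject's data through the const wrapper,
# so it stays type stable without needing a closure or a non-const global.
loglike(parms) = -sum(log(pdf(Normal(parms[1], exp(parms[2])), subdata.values)))

subdata.values = rand(Normal(10, 2), 1000)   # swap in one subject's data
optimize(loglike, [1.0; 1.0], method = :nelder_mead)
```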

This doesn't necessarily provide the cleanest interface, but it should be 
more performant than your current solution.

// T


On Monday, September 28, 2015 at 6:53:22 PM UTC+2, Christopher Fisher wrote:
>
> Thanks for your willingness to investigate the matter more closely. I 
> cannot post the exact code I am using (besides its rather long). However, I 
> posted a toy example that follows the same basic operations. Essentially, 
> my code involves an outer function (SubLoop) that loops through a data set 
> with multiple subjects. The model is fit to each subject's data. The other 
> function (LogLike) computes the log likelihood and is called by optimize. 
> The first set of code corresponds to the closure method and the second set 
> of code corresponds to the global variable method. In both cases, the code 
> executed in about .85 seconds over several runs on my computer and has 
> about 1.9% gc time. I'm also aware that my code is probably not optimized 
> in other regards. So I would be receptive to any other advice you might 
> have. 
>
>
>  
>
> using Distributions,Optim
>
> function SubLoop1(data1)
>
> function LogLike1(parms) 
>
> L = pdf(Normal(parms[1],exp(parms[2])),SubData)
>
> LL = -sum(log(L))
>
> end
>
> #Number of Subjects
>
> Nsub = size(unique(data1[:,1],1),1)
>
> #Initialize per subject Data
>
> SubData = []
>
> for i = 1:Nsub
>
> idx = data1[:,1] .== i
>
> SubData = data1[idx,2]
>
> parms0 = [1.0;1.0]
>
> optimize(LogLike1,parms0,method=:nelder_mead)
>
> end
>
> end
>
>  
>
> N = 10^5
>
> #Column 1 subject index, column 2 value
>
> Data = zeros(N*2,2)
>
> for sub = 1:2
>
> Data[(N*(sub-1)+1):(N*sub),:] = [sub*ones(N) rand(Normal(10,2),N)]
>
> end
>
> @time SubLoop1(Data)
>
>
>
>
>
>
>
>
>
>
>
> Using Distributions, Optim
>
> function SubLoop2(data1)
>
> global SubData
>
> #Number of subjects
>
> Nsub = size(unique(data1[:,1],1),1)
>
> #Initialize per subject data
>
> SubData = []
>
> for i = 1:Nsub
>
> idx = data1[:,1] .== i
>
> SubData = data1[idx,2]
>
> parms0 = [1.0;1.0]
>
> optimize(LogLike2,parms0,method=:nelder_mead)
>
> end
>
> end
>
>  
>
> function LogLike2(parms) 
>
> L = pdf(Normal(parms[1],exp(parms[2])),SubData)
>
> LL = -sum(log(L))
>
> end
>
>  
>
> N = 10^5
>
> #Column 1 subject index, column 2 value
>
> Data = zeros(N*2,2)
>
> for sub = 1:2
>
> Data[(N*(sub-1)+1):(N*sub),:] = [sub*ones(N) rand(Normal(10,2),N)]
>
> end
>
> @time SubLoop2(Data)
>
>
>
> On Monday, September 28, 2015 at 11:24:13 AM UTC-4, Kristoffer Carlsson 
> wrote:
>>
>> From only that comment alone it is hard to give any further advice. 
>>
>> What overhead are you seeing?
>>
>> Posting runnable code is the best way to get help.
>>
>>

Re: [julia-users] How to use variables in subsets of DataFrames ?

2015-09-29 Thread Fred
Great answer Rob ! This is exactly what I was looking for !

Many Thanks !

Fred

On Monday, September 28, 2015 at 5:35:15 PM UTC+2, Rob J Goedman wrote:
>
> Fred,
>
> Would  below example work for you?
>
> julia> dft = readtable("/Users/rob/Desktop/test.csv", separator = '\t')
> 8x5 DataFrames.DataFrame
> | Row | title1 | title2 | title3 | title4 | title5 |
> |-----|--------|--------|--------|--------|--------|
> | 1   | 10     | 20     | 30     | 40     | 50     |
> | 2   | 11     | 21     | 31     | 41     | 51     |
> | 3   | 12     | 22     | 32     | 42     | 52     |
> | 4   | 13     | 23     | 33     | 43     | 53     |
> | 5   | 14     | 24     | 34     | 44     | 54     |
> | 6   | 15     | 25     | 35     | 45     | 55     |
> | 7   | 16     | 26     | 36     | 46     | 56     |
> | 8   | 17     | 27     | 37     | 47     | 57     |
>
> julia> titles = names(dft)
> 5-element Array{Symbol,1}:
>  :title1
>  :title2
>  :title3
>  :title4
>  :title5
>
> julia> dft[[2:6], titles[2:5]]
> 5x4 DataFrames.DataFrame
> | Row | title2 | title3 | title4 | title5 |
> |-----|--------|--------|--------|--------|
> | 1   | 21     | 31     | 41     | 51     |
> | 2   | 22     | 32     | 42     | 52     |
> | 3   | 23     | 33     | 43     | 53     |
> | 4   | 24     | 34     | 44     | 54     |
> | 5   | 25     | 35     | 45     | 55     |
>
> julia> dft[[2:6], titles[3]]
> 5-element DataArrays.DataArray{Int64,1}:
>  31
>  32
>  33
>  34
>  35
>
>
> If you use list comprehension you will need the extra Symbol[] construct:
>
> julia> dft[:, Symbol[titles[i] for i in 2:3]]
> 8x2 DataFrames.DataFrame
> | Row | title2 | title3 |
> |-----|--------|--------|
> | 1   | 20     | 30     |
> | 2   | 21     | 31     |
> | 3   | 22     | 32     |
> | 4   | 23     | 33     |
> | 5   | 24     | 34     |
> | 6   | 25     | 35     |
> | 7   | 26     | 36     |
> | 8   | 27     | 37     |
>
>
> Regards,
> Rob
>
> On Sep 28, 2015, at 2:18 AM, Fred  
> wrote:
>
> Hi !
> I would like to know how is it possible to use variables in subsets of 
> DataFrames ? I would like to use a syntax like
>  df[:,:titles[1]] and df[:,:titles[1:3]] 
>
> Thanks for your help !
>
>
> julia> using DataFrames
>
>
> julia> df = readtable("test.csv", separator = '\t')
> 8x5 DataFrame
> | Row | title1 | title2 | title3 | title4 | title5 |
> |-----|--------|--------|--------|--------|--------|
> | 1   | 10 | 20 | 30 | 40 | 50 |
> | 2   | 11 | 21 | 31 | 41 | 51 |
> | 3   | 12 | 22 | 32 | 42 | 52 |
> | 4   | 13 | 23 | 33 | 43 | 53 |
> | 5   | 14 | 24 | 34 | 44 | 54 |
> | 6   | 15 | 25 | 35 | 45 | 55 |
> | 7   | 16 | 26 | 36 | 46 | 56 |
> | 8   | 17 | 27 | 37 | 47 | 57 |
>
>
> julia> titles = readdlm("titles.csv")
> 3x1 Array{Any,2}:
>  "title3"
>  "title1"
>  "title5"
> 
>
> julia> df[:,:title2]
> 8-element DataArray{Int64,1}: 
> 
>  20   
> 
>  21   
> 
>  22   
> 
>  23   
> 
>  24   
> 
>  25   
> 
>  26   
> 
>  27   
> 
>   
> 
> julia> titles[1]
> "title3" 
>  
>   
> 
> julia> df[:,:titles[1]]
> ERROR: `getindex` has no method matching getindex(::Symbol, ::Int64)   
>
> julia> df[:,:titles[1:3]]
> ERROR: `getindex` has no method matching getindex(::Symbol, ::UnitRange{
> Int64}) 
>
>
> 
> 
>
>
>

[julia-users] Re: How to create an array of a given type?

2015-09-29 Thread Michael Hatherly


This is probably https://github.com/JunoLab/atom-julia-client/issues/44, just a 
display issue with arrays containing #undef. I'm not sure when someone will have 
a chance to fix it; until then, you'll need to avoid trying to display results 
containing undefined references. If you'd like to try to fix it yourself, a good 
place to start looking is the 
https://github.com/JunoLab/Atom.jl/tree/master/src/display folder.
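As a stopgap, here is a small sketch of how to sidestep the display (using the type from the question, with the lowercase `type` keyword Julia expects):

```julia
type T
    x::Float64
    y::Float64
end

b = Array{T}(3);      # the trailing semicolon suppresses display of the #undef-filled array
for i in 1:3
    b[i] = T(i, 2i)   # fill every slot before showing the array
end
b                     # now safe to display
```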

— Mike
On Tuesday, 29 September 2015 06:17:11 UTC+2, Dongning Guo wrote:
>
> I'd like to create an array of a certain type.  The basic code looks as 
> follows:
>
> Type T
>   x::Float64
>   y::Float64
> end
> b = Array{T}(3)
>
> Basically I wish  b[1], b[2], b[3] to be of the same type T.
>
> I'm using Julia within the Atom editor.  The last line of code causes 
> Julia to hang and produces the following error "Julia Client – Internal 
> Error
> UndefRefError: access to undefined reference
> in yieldto at 
> /Applications/Julia-0.4.0-rc2.app/Contents/Resources/julia/lib/julia/sys.dylib
> in wait at 
> /Applications/Julia-0.4.0-rc2.app/Contents/Resources/julia/lib/julia/sys.dylib
>  
> (repeats 3 times)
> [inlined code] from /Users/me/.julia/v0.4/Lazy/src/dynamic.jl:69
> in anonymous at /Users/me/.julia/v0.4/Atom/src/eval.jl:44
> [inlined code] from /Users/me/.julia/v0.4/Atom/src/comm.jl:23
> in anonymous at task.jl:63
>
>
> The same code works just fine under command line Julia (0.4.0-rc2).
>
> Any help will be appreciated, including how to have a clean slate 
> reinstallation (either Juno or Atom) that just works.
>
> I'm relatively new to Julia.  I was using Julia under Juno until Juno 
> broke down with no fix after I tried.  Reinstallation didn't help (several 
> other people had similar problems, see 
> https://groups.google.com/d/topic/julia-users/ntOb9HNm0ac/discussion).  I 
> then switched to Atom and still has problems.
>
>

[julia-users] Re: Passing data through Optim

2015-09-29 Thread Kristoffer Carlsson
In my experience a closure is much faster than a function that accesses 
global variables even if it is passed to another function as an argument.

Two different examples. Here we pass a function that accesses (non const) 
globals to another function:

a = 3

g(f::Function, b, N) = f(b, N)

function f_glob(b, N)
    s = 0
    for i = 1:N
        s += a
    end
    return s
end

@time g(f_glob, 3, 10^6)

3000000

0.020655 seconds (999.84 k allocations: 15.256 MB, 18.40% gc time)


Here we instead use an outer function that generates the desired closure 
and pass the closure as the function argument:

function f_clos(b, N, a)
    function f_inner(b, N)
        s = 0
        for i = 1:N
            s += a
        end
        return s
    end

    g(f_inner, b, N)   # pass the closure on together with its arguments
end

@time f_clos(3, 10^6, 2)

2000000

0.21 seconds (19 allocations: 928 bytes)




On Tuesday, September 29, 2015 at 8:47:06 AM UTC+2, Tomas Lycken wrote:
>
> The performance penalty from global variables comes from the fact that 
> non-const global variables are type unstable, which means that the compiler 
> has to generate very defensive, and thus slow, code. The same thing is, for 
> now, true also for anonymous functions and functions passed as parameters 
> to other functions, so switching one paradigm for the other will 
> unfortunately not help much for performance. I haven't looked closely at 
> your code, but the [FastAnonymous.jl](
> https://github.com/timholy/FastAnonymous.jl) package can probably help 
> with making the anonymous function version faster.
>
> If the data you want to pass along doesn't change, using *const* globals 
> is fine (at least from a performance perspective, you might have other 
> reasons to avoid them too...), i.e. you can do the following and it will 
> also be faster:
>
> ```
> const data = 3
>
> function model(x)
> # use data somehow
> end
>
> # use the model function in your optimization
> ```
>
> If you need the data to change, you can still use a `const` global 
> variable, but set it to some mutable (and type-stable!) object, for example
>
> ```
> type MyData
> data::Int
> end
>
> const data = MyData(3)
>
> function model(x)
> data.data += 1
> # ...
> end
>
> # use the model function in your optimization
> # it can now modify the global data on each invocation
> ```
>
> This doesn't necessarily provide the cleanest interface, but it should be 
> more performant than your current solution.
>
> // T
> 
>
> On Monday, September 28, 2015 at 6:53:22 PM UTC+2, Christopher Fisher 
> wrote:
>>
>> Thanks for your willingness to investigate the matter more closely. I 
>> cannot post the exact code I am using (besides its rather long). However, I 
>> posted a toy example that follows the same basic operations. Essentially, 
>> my code involves an outer function (SubLoop) that loops through a data set 
>> with multiple subjects. The model is fit to each subject's data. The other 
>> function (LogLike) computes the log likelihood and is called by optimize. 
>> The first set of code corresponds to the closure method and the second set 
>> of code corresponds to the global variable method. In both cases, the code 
>> executed in about .85 seconds over several runs on my computer and has 
>> about 1.9% gc time. I'm also aware that my code is probably not optimized 
>> in other regards. So I would be receptive to any other advice you might 
>> have. 
>>
>>
>>  
>>
>> using Distributions,Optim
>>
>> function SubLoop1(data1)
>>
>> function LogLike1(parms) 
>>
>> L = pdf(Normal(parms[1],exp(parms[2])),SubData)
>>
>> LL = -sum(log(L))
>>
>> end
>>
>> #Number of Subjects
>>
>> Nsub = size(unique(data1[:,1],1),1)
>>
>> #Initialize per subject Data
>>
>> SubData = []
>>
>> for i = 1:Nsub
>>
>> idx = data1[:,1] .== i
>>
>> SubData = data1[idx,2]
>>
>> parms0 = [1.0;1.0]
>>
>> optimize(LogLike1,parms0,method=:nelder_mead)
>>
>> end
>>
>> end
>>
>>  
>>
>> N = 10^5
>>
>> #Column 1 subject index, column 2 value
>>
>> Data = zeros(N*2,2)
>>
>> for sub = 1:2
>>
>> Data[(N*(sub-1)+1):(N*sub),:] = [sub*ones(N) rand(Normal(10,2),N)]
>>
>> end
>>
>> @time SubLoop1(Data)
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> Using Distributions, Optim
>>
>> function SubLoop2(data1)
>>
>> global SubData
>>
>> #Number of subjects
>>
>> Nsub = size(unique(data1[:,1],1),1)
>>
>> #Initialize per subject data
>>
>> SubData = []
>>
>> for i = 1:Nsub
>>
>> idx = data1[:,1] .== i
>>
>> SubData = data1[idx,2]
>>
>> parms0 = [1.0;1.0]
>>
>> optimize(LogLike2,parms0,method=:nelder_mead)
>>
>> end
>>
>> end
>>
>>  
>>
>> function LogLike2(parms) 
>>
>> L = pdf(Normal(parms[1],exp(parms[2])),SubData)
>>
>> LL = -sum(log(L))
>>
>> end
>>
>>  
>>
>> N = 10^5
>>
>> #Column 1 subject index, column 2 value
>>
>> Data = zeros(N*2,2)
>>
>> for sub = 1:2
>>
>> Data[(N*(sub-1)+1):(N*sub),:] = 

[julia-users] Escher image

2015-09-29 Thread Yakir Gagnon


How do you show a local image (say it’s in the directory where you ran escher 
--serve)?


[julia-users] Is it possible to "precompile" during the "__precompile__"?

2015-09-29 Thread Sheehan Olver

I'm wondering if it's possible to have functions precompiled for specific 
types during the __precompile__ stage, so that this does not need to be 
repeated for each `using`?

Gadfly seems to do this by defining a _precompile_() function, but it's 
unclear whether that compiles during the __precompile__ stage or during 
the loading stage.


[julia-users] Re: suppress deprecation warnings

2015-09-29 Thread David van Leeuwen
Hello, 

On Thursday, September 17, 2015 at 7:28:10 PM UTC+2, Michael Francis wrote:
>
> How can I suppress the printing of deprecation warnings in 0.4? I need to 
> temporarily suppress these so that I can see the wood for the trees. 
>

I think I have the same request: is there a way to disable warnings in 
included packages, so that I can more easily find the deprecation warnings 
in my own code?  There are lots of these for julia 0.4-rc in popular 
packages like DataFrames and NumericFuns.  Something like:

Module MyModule

Global.warming = False ## oops, typo
using FossilFuels
using TeaParty
Global.warming = True

## MyModule code starts here

Cheers, 

---david


[julia-users] Re: Is it possible to "precompile" during the "__precompile__"?

2015-09-29 Thread Kristoffer Carlsson
These are precompiled "once and for all" and are recompiled when the source 
changes.

Relevant package:
https://github.com/timholy/SnoopCompile.jl 
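For reference, a minimal sketch of the pattern being discussed (module and function names are hypothetical). Because the `_precompile_()` call sits at module top level, it runs while the cache file is being built rather than on every `using`:

```julia
__precompile__()
module MyPkg

foo(x) = x + 1

# Explicit precompile requests for the signatures we care about.
function _precompile_()
    precompile(foo, (Int,))
    precompile(foo, (Float64,))
end

# Called at module top level, so it executes during precompilation of MyPkg.
_precompile_()

end # module
```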

On Tuesday, September 29, 2015 at 8:16:36 AM UTC+2, Sheehan Olver wrote:
>
>
> I'm wondering if its possible to have functions precompiled for specific 
> types during the __precompile__ stage, so that this does not need to be 
> repeated for each using?
>
> Gadfly seems to do this by defining a _precompile_() function, but it's 
> unclear whether that's compiling during the __precompile__ stage, vs during 
> the loading stage.  
>


Re: [julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Scott Jones
Yes, one of the things that won me over was that I could write many fewer 
lines of code than I'd have to write in C/C++/Java (and that was when I was 
still a total novice - which I still am in many ways, but getting better).
Python has been successful IMO because it's easy to quickly get stuff up 
and running - not like C/C++/Java - and Julia can be the same.

I like Tom's idea - I'd maybe rank things based on: getting correct results 
(most important), time to do the programming, lines of code (that can have 
a big effect on maintainability) [there might have to be some rules, though, 
so people don't put everything on one line - maybe count statements, not 
lines], and execution time.

Code that doesn't get 100% correct results would not even be considered, 
and the other 3 metrics would be weighted, to determine the winner(s).

On Tuesday, September 29, 2015 at 9:18:15 AM UTC-4, Tom Breloff wrote:
>
> One of the recent "why is my code slow" posts from a new user had a 
> statement that was something along the lines of "sure it's still 4-5 times 
> faster than python, but I expected it to be much faster".  I think this 
> sums it up... new users hear of Julia's blazing speed and expect it to be 
> as fast as highly optimized C/Fortran with absolutely zero effort.  
>
> I think a great sales pitch for Julia could come in the form of a 
> contest... pick several programming problems, and choose your language.  
> For each problem, you get 1 point for finishing it first (correctly), and 1 
> point for the fastest execution time.  What language would you choose?  
>
> On Tue, Sep 29, 2015 at 8:56 AM, Tomas Lycken  > wrote:
>
>> I think, this may be a little unfair to Julia.
>>
>> I agree that it’s unfair - but new users are seldom fair. What I meant to 
>> get at was not that Julia code by an inexperienced programmer is worse than 
>> anything else, but just that since Julia *can* be so fast, I think 
>> there’s a big risk that the gap between potential and actual throughput in 
>> many “first real Julia program”s becomes an annoyance, even if Julia has a 
>> the potential to reach further than Python ever could. In other words, it’s 
>> not Julia’s actual performance that is the problem, but the way it differs 
>> from expectations.
>>
>> // T
>>
>> On Tuesday, September 29, 2015 at 2:50:14 PM UTC+2, Páll Haraldsson wrote:
>>
>> On Tuesday, September 29, 2015 at 10:20:18 AM UTC, Tomas Lycken wrote:

 One thing Python does well, which Julia doesn't (yet) succeed in, is 
 make it easy to start coding from zero experience and get something that 
 executes "well enough"

>>>
>>> I think, this may be a little unfair to Julia. Python (and Perl) 
>>> succeeded not because it is fast (only because it has fast development 
>>> times - to begin with at least). Julia is expected to be fast, compared to 
>>> C, Python isn't. Even if Julia were as slow as Python, it seems to be a 
>>> better language - more maintainable, as more static and doesn't really have 
>>> duck-typing (and Hoare's "billion dollar mistake"), right?
>>>
>>> (although, as always with first-time coders, code organization and 
 readability might still leave some things to wish for...).

 In Julia, it's very much possible to get something to run, but the 
 performance differences between well-written and not-so-well-written code 
 are *huge*.

>>>
>>> Right, but even your code that is written to be slow should work (no 
>>> segfaults..) and not be slower than Python? At least, those would be 
>>> exceptional cases, all that I would like to know about as I see no good 
>>> reason for it.
>>>
>>> The way I see it, yes, type-instability makes your code way slower, but 
>>> isn't that (roughly) the same as in Python, where all code is 
>>> "type-unstable", because of duck-typing? That is or was at least the 
>>> default. Now PyPy and Numba etc. helps (that didn't exist in the 
>>> beginning/10 years ago)
>>>
>>> This means that most users will show their code to someone who knows 
 more than they do, and more likely than not get a first reaction along the 
 lines of "everything you do is wrong". Although the statement isn't 
 untrue, 
 it's very off-putting - and even more off-putting is the fact that there's 
 a lot of "computer sciencey" stuff you need to understand to be able to 
 grasp

>>>
>>> Only for fast code, not "correct" code. And 90% of code isn't 
>>> speed-critical. To a beginner this seems very good. First you learn the 
>>> basics. You can prototype and later optimize ("premature optimization is 
>>> the root of all evil"?).
>>>
>>> *why* you did it wrong (type stability, difference between abstract and 
 leaf types, difference between anonymous and named functions etc).

 Don't get me wrong - I think Julia is doing a lot of things right, and 
 I'm glad that these "CS-y" questions are asked and handled up-front: this 
 is what gives 

Re: [julia-users] Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Mauro
I don't think that you need nor that you should use generated functions.
But maybe I'm wrong, what are you trying to achieve?  This should work
as you want:

function testfun2!{N}(X,Y::NTuple{N,Float64})
    for i in eachindex(X), j in 1:N # much better to have the loop this way
        X[i][j] = Y[j]
    end
    return X
end

# Setup for function call
InnerArrayPts = 3
OuterArrayPts = 10
Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
Yinput = rand(InnerArrayPts)

testfun2!(Xinput,tuple(Yinput...))

On Tue, 2015-09-29 at 13:20, Alan Crawford  wrote:
> I would like to preallocate memory of an array of arrays and pass it to a
> function to be filled in. I have created an example below that illustrates
> my question(s).
>
> Based on my (probably incorrect) understanding that it would be desirable
> to fix the type in my function, I would like to be able to pass my array of
> arrays, X, in a type stable way.  However, I can't seem to pass
> Array{Array{Float64,N},1}. If, however, i do not attempt to impose the type
> on the function, it works.
>
> Is there a way to pass Array{Array{Float64,N},1 to my function? Do I even
> need to fix the type in the function to get good performance?
>
> # version of Julia: 0.4.0-rc3
>
> @generated function
> testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
> @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
> # Setup for function call
> InnerArrayPts = 3
> OuterArrayPts = 10
> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
> Yinput = rand(InnerArrayPts)
>
> # Method Error Problem
> testfun1!(Xinput,tuple(Yinput...))
>
> # This works
> testfun2!(Xinput,tuple(Yinput...))
>
> I also tried with the following version of testfun1!() and again got a
> method error.
>
> @generated function
> testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
>
> I am sure i am misunderstanding something quite fundamental and/or missing
> something straightforward...
>
> Thanks,
> Alan


Re: [julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Tom Breloff
One of the recent "why is my code slow" posts from a new user had a
statement that was something along the lines of "sure it's still 4-5 times
faster than python, but I expected it to be much faster".  I think this
sums it up... new users hear of Julia's blazing speed and expect it to be
as fast as highly optimized C/Fortran with absolutely zero effort.

I think a great sales pitch for Julia could come in the form of a
contest... pick several programming problems, and choose your language.
For each problem, you get 1 point for finishing it first (correctly), and 1
point for the fastest execution time.  What language would you choose?

On Tue, Sep 29, 2015 at 8:56 AM, Tomas Lycken 
wrote:

> I think, this may be a little unfair to Julia.
>
> I agree that it’s unfair - but new users are seldom fair. What I meant to
> get at was not that Julia code by an inexperienced programmer is worse than
> anything else, but just that since Julia *can* be so fast, I think
> there’s a big risk that the gap between potential and actual throughput in
> many “first real Julia program”s becomes an annoyance, even if Julia has a
> the potential to reach further than Python ever could. In other words, it’s
> not Julia’s actual performance that is the problem, but the way it differs
> from expectations.
>
> // T
>
> On Tuesday, September 29, 2015 at 2:50:14 PM UTC+2, Páll Haraldsson wrote:
>
> On Tuesday, September 29, 2015 at 10:20:18 AM UTC, Tomas Lycken wrote:
>>>
>>> One thing Python does well, which Julia doesn't (yet) succeed in, is
>>> make it easy to start coding from zero experience and get something that
>>> executes "well enough"
>>>
>>
>> I think, this may be a little unfair to Julia. Python (and Perl)
>> succeeded not because it is fast (only because it has fast development
>> times - to begin with at least). Julia is expected to be fast, compared to
>> C, Python isn't. Even if Julia were as slow as Python, it seems to be a
>> better language - more maintainable, as more static and doesn't really have
>> duck-typing (and Hoare's "billion dollar mistake"), right?
>>
>> (although, as always with first-time coders, code organization and
>>> readability might still leave some things to wish for...).
>>>
>>> In Julia, it's very much possible to get something to run, but the
>>> performance differences between well-written and not-so-well-written code
>>> are *huge*.
>>>
>>
>> Right, but even your code that is written to be slow should work (no
>> segfaults..) and not be slower than Python? At least, those would be
>> exceptional cases, all that I would like to know about as I see no good
>> reason for it.
>>
>> The way I see it, yes, type-instability makes your code way slower, but
>> isn't that (roughly) the same as in Python, where all code is
>> "type-unstable", because of duck-typing? That is or was at least the
>> default. Now PyPy and Numba etc. helps (that didn't exist in the
>> beginning/10 years ago)
>>
>> This means that most users will show their code to someone who knows more
>>> than they do, and more likely than not get a first reaction along the lines
>>> of "everything you do is wrong". Although the statement isn't untrue, it's
>>> very off-putting - and even more off-putting is the fact that there's a lot
>>> of "computer sciencey" stuff you need to understand to be able to grasp
>>>
>>
>> Only for fast code, not "correct" code. And 90% of code isn't
>> speed-critical. To a beginner this seems very good. First you learn the
>> basics. You can prototype and later optimize ("premature optimization is
>> the root of all evil"?).
>>
>> *why* you did it wrong (type stability, difference between abstract and
>>> leaf types, difference between anonymous and named functions etc).
>>>
>>> Don't get me wrong - I think Julia is doing a lot of things right, and
>>> I'm glad that these "CS-y" questions are asked and handled up-front: this
>>> is what gives Julia much of its power. Hopefully, much of the performance
>>> difference between hacked-together-rubbish code and well-polished code will
>>> be eradicated by version 1.0, and we'll see how popular Julia becomes then.
>>>
>>> Currently, though, the success of any general programming language or
>>> tool seems to hinge mostly on what you can build with it (Objective C is
>>> ugly as hell
>>>
>>
>> yes.. You can actually call Objective-C (OS X) from Julia.. I think
>> Android support should come first, then iOS support. Neither should be too
>> hard, possibly Apple would object..
>>
>> --
>> Palli.
>>
>> ​
>


Re: [julia-users] Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Alan Crawford
Thanks. The example was intended as an illustration of more complex code in 
which this was an issue. However, maybe I could do it without the @generated.  
I will revisit it and see if I really did need it...

Thanks, Alan

On 29 Sep 2015, at 13:48, Mauro  wrote:

> I don't think that you need nor that you should use generated functions.
> But maybe I'm wrong, what are you trying to achieve?  This should work
> as you want:
> 
> function testfun2!{N}(X,Y::NTuple{N,Float64})
>for i in eachindex(X), j in 1:N # much better to have the loop this way
>X[i][j] = Y[j]
>end
>return X
> end
> 
> # Setup for function call
> InnerArrayPts = 3
> OuterArrayPts = 10
> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
> Yinput = rand(InnerArrayPts)
> 
> testfun2!(Xinput,tuple(Yinput...))
> 
> On Tue, 2015-09-29 at 13:20, Alan Crawford  wrote:
>> I would like to preallocate memory of an array of arrays and pass it to a
>> function to be filled in. I have created an example below that illustrates
>> my question(s).
>> 
>> Based on my (probably incorrect) understanding that it would be desirable
>> to fix the type in my function, I would like to be able to pass my array of
>> arrays, X, in a type stable way.  However, I can't seem to pass
>> Array{Array{Float64,N},1}. If, however, i do not attempt to impose the type
>> on the function, it works.
>> 
>> Is there a way to pass Array{Array{Float64,N},1 to my function? Do I even
>> need to fix the type in the function to get good performance?
>> 
>> # version of Julia: 0.4.0-rc3
>> 
>> @generated function
>> testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
>> quote
>> for j in 1:$N, i in eachindex(X)
>> X[i][j] = Y[j]
>> end
>> return X
>> end
>> end
>> 
>> @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
>> quote
>> for j in 1:$N, i in eachindex(X)
>> X[i][j] = Y[j]
>> end
>> return X
>> end
>> end
>> 
>> # Setup for function call
>> InnerArrayPts = 3
>> OuterArrayPts = 10
>> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
>> Yinput = rand(InnerArrayPts)
>> 
>> # Method Error Problem
>> testfun1!(Xinput,tuple(Yinput...))
>> 
>> # This works
>> testfun2!(Xinput,tuple(Yinput...))
>> 
>> I also tried with the following version of testfun1!() and again got a
>> method error.
>> 
>> @generated function
>> testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
>> quote
>> for j in 1:$N, i in eachindex(X)
>> X[i][j] = Y[j]
>> end
>> return X
>> end
>> end
>> 
>> 
>> I am sure i am misunderstanding something quite fundamental and/or missing
>> something straightforward...
>> 
>> Thanks,
>> Alan



Re: [julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Tom Breloff
Leave it to Scott to make things more complicated... :)

On Tue, Sep 29, 2015 at 9:47 AM, Scott Jones 
wrote:

> Yes, one of the things that won me over was that I could write many fewer
> lines of code that I'd have to do in C/C++/Java (and that was when I was
> still a total novice - which I still am in many ways, but getting better).
> Python has been successful IMO because it's easy to quickly get stuff up
> and running - not like C/C++/Java, and Julia can be the same.
>
> I like Tom's idea - I'd maybe rank things based on: getting correct
> results (most important), time to do the programming, lines of code (that
> can have a big effect on maintainability) [might have to be some rules
> though, so people don't put everything on one line, maybe count statements,
> not lines]), and execution time.
>
> Code that doesn't get 100% correct results would not even be considered,
> and the other 3 metrics would be weighted, to determine the winner(s).
>
> On Tuesday, September 29, 2015 at 9:18:15 AM UTC-4, Tom Breloff wrote:
>>
>> One of the recent "why is my code slow" posts from a new user had a
>> statement that was something along the lines of "sure it's still 4-5 times
>> faster than python, but I expected it to be much faster".  I think this
>> sums it up... new users hear of Julia's blazing speed and expect it to be
>> as fast as highly optimized C/Fortran with absolutely zero effort.
>>
>> I think a great sales pitch for Julia could come in the form of a
>> contest... pick several programming problems, and choose your language.
>> For each problem, you get 1 point for finishing it first (correctly), and 1
>> point for the fastest execution time.  What language would you choose?
>>
>> On Tue, Sep 29, 2015 at 8:56 AM, Tomas Lycken 
>> wrote:
>>
>>> I think, this may be a little unfair to Julia.
>>>
>>> I agree that it’s unfair - but new users are seldom fair. What I meant
>>> to get at was not that Julia code by an inexperienced programmer is worse
>>> than anything else, but just that since Julia *can* be so fast, I think
>>> there’s a big risk that the gap between potential and actual throughput in
>>> many “first real Julia program”s becomes an annoyance, even if Julia has a
>>> the potential to reach further than Python ever could. In other words, it’s
>>> not Julia’s actual performance that is the problem, but the way it differs
>>> from expectations.
>>>
>>> // T
>>>
>>> On Tuesday, September 29, 2015 at 2:50:14 PM UTC+2, Páll Haraldsson
>>> wrote:
>>>
>>> On Tuesday, September 29, 2015 at 10:20:18 AM UTC, Tomas Lycken wrote:
>
> One thing Python does well, which Julia doesn't (yet) succeed in, is
> make it easy to start coding from zero experience and get something that
> executes "well enough"
>

 I think, this may be a little unfair to Julia. Python (and Perl)
 succeeded not because it is fast (only because it has fast development
 times - to begin with at least). Julia is expected to be fast, compared to
 C, Python isn't. Even if Julia were as slow as Python, it seems to be a
 better language - more maintainable, as more static and doesn't really have
 duck-typing (and Hoare's "billion dollar mistake"), right?

 (although, as always with first-time coders, code organization and
> readability might still leave some things to wish for...).
>
> In Julia, it's very much possible to get something to run, but the
> performance differences between well-written and not-so-well-written code
> are *huge*.
>

 Right, but even your code that is written to be slow should work (no
 segfaults..) and not be slower than Python? At least, those would be
 exceptional cases, all that I would like to know about as I see no good
 reason for it.

 The way I see it, yes, type-instability makes your code way slower, but
 isn't that (roughly) the same as in Python, where all code is
 "type-unstable", because of duck-typing? That is or was at least the
 default. Now PyPy and Numba etc. helps (that didn't exist in the
 beginning/10 years ago)

 This means that most users will show their code to someone who knows
> more than they do, and more likely than not get a first reaction along the
> lines of "everything you do is wrong". Although the statement isn't 
> untrue,
> it's very off-putting - and even more off-putting is the fact that there's
> a lot of "computer sciencey" stuff you need to understand to be able to
> grasp
>

 Only for fast code, not "correct" code. And 90% of code isn't
 speed-critical. To a beginner this seems very good. First you learn the
 basics. You can prototype and later optimize ("premature optimization is
 the root of all evil"?).

 *why* you did it wrong (type stability, difference between abstract and
> leaf types, difference between anonymous and named 

[julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Páll Haraldsson
On Tuesday, September 29, 2015 at 10:20:18 AM UTC, Tomas Lycken wrote:
>
> One thing Python does well, which Julia doesn't (yet) succeed in, is make 
> it easy to start coding from zero experience and get something that 
> executes "well enough"
>

I think this may be a little unfair to Julia. Python (and Perl) succeeded 
not because it is fast, but because it has fast development times (to begin 
with, at least). Julia is expected to be fast compared to C; Python isn't. 
Even if Julia were as slow as Python, it seems to be a better language - 
more maintainable, since it is more static and doesn't really have 
duck-typing (and Hoare's "billion dollar mistake"), right?

(although, as always with first-time coders, code organization and 
> readability might still leave some things to wish for...).
>
> In Julia, it's very much possible to get something to run, but the 
> performance differences between well-written and not-so-well-written code 
> are *huge*.
>

Right, but even your code that is written to be slow should work (no 
segfaults..) and not be slower than Python? At least, those would be 
exceptional cases - all of which I would like to know about, as I see no 
good reason for them.

The way I see it, yes, type-instability makes your code way slower, but 
isn't that (roughly) the same as in Python, where all code is 
"type-unstable", because of duck-typing? That is or was at least the 
default. Now PyPy and Numba etc. helps (that didn't exist in the 
beginning/10 years ago)

This means that most users will show their code to someone who knows more 
> than they do, and more likely than not get a first reaction along the lines 
> of "everything you do is wrong". Although the statement isn't untrue, it's 
> very off-putting - and even more off-putting is the fact that there's a lot 
> of "computer sciencey" stuff you need to understand to be able to grasp
>

Only for fast code, not "correct" code. And 90% of code isn't 
speed-critical. To a beginner this seems very good. First you learn the 
basics. You can prototype and later optimize ("premature optimization is 
the root of all evil"?).

*why* you did it wrong (type stability, difference between abstract and 
> leaf types, difference between anonymous and named functions etc).
>
> Don't get me wrong - I think Julia is doing a lot of things right, and I'm 
> glad that these "CS-y" questions are asked and handled up-front: this is 
> what gives Julia much of its power. Hopefully, much of the performance 
> difference between hacked-together-rubbish code and well-polished code will 
> be eradicated by version 1.0, and we'll see how popular Julia becomes then.
>
> Currently, though, the success of any general programming language or tool 
> seems to hinge mostly on what you can build with it (Objective C is ugly as 
> hell
>

yes.. You can actually call Objective-C (OS X) from Julia.. I think Android 
support should come first, then iOS support. Neither should be too hard, 
possibly Apple would object..

-- 
Palli.



Re: [julia-users] Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Mauro
> Thanks. The example was intended as an illustration of a more complex
> code in which this was an issue. However, maybe i could do it without
> the @generated.  I will revisit it and see if I really did need it...

Well, it is always hard to tell what's best.  If you need certainty,
then you need to benchmark.  Note that in the example below, Julia will
compile a specialized version of the function for each NTuple anyway.
This may be good or bad, depending on how much time compilation vs
running takes.

A case for a generated function would be if, say, the number of nested
loops would change depending on some parameter, say the dimensionality
of an array.
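For illustration, a hedged sketch of that kind of case - a generated function whose loop structure is decided from the tuple length N when the method is generated (the names are made up, not from this thread):

```julia
@generated function unroll_fill!{N}(X, Y::NTuple{N,Float64})
    body = Expr(:block)
    for j in 1:N
        # one assignment per tuple element, unrolled at method-generation time
        push!(body.args, :(X[$j] = Y[$j]))
    end
    quote
        $body
        return X
    end
end

unroll_fill!(zeros(3), (1.0, 2.0, 3.0))
```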

> Thanks, Alan
>
>
> was just an illustration of a the larger piece of code where I am making using
> On 29 Sep 2015, at 13:48, Mauro  wrote:
>
>> I don't think that you need nor that you should use generated functions.
>> But maybe I'm wrong, what are you trying to achieve?  This should work
>> as you want:
>>
>> function testfun2!{N}(X,Y::NTuple{N,Float64})
>>for i in eachindex(X), j in 1:N # much better to have the loop this way
>>X[i][j] = Y[j]
>>end
>>return X
>> end
>>
>> # Setup for function call
>> InnerArrayPts = 3
>> OuterArrayPts = 10
>> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
>> Yinput = rand(InnerArrayPts)
>>
>> testfun2!(Xinput,tuple(Yinput...))
>>
>> On Tue, 2015-09-29 at 13:20, Alan Crawford  wrote:
>>> I would like to preallocate memory of an array of arrays and pass it to a
>>> function to be filled in. I have created an example below that illustrates
>>> my question(s).
>>>
>>> Based on my (probably incorrect) understanding that it would be desirable
>>> to fix the type in my function, I would like to be able to pass my array of
>>> arrays, X, in a type stable way.  However, I can't seem to pass
>>> Array{Array{Float64,N},1}. If, however, i do not attempt to impose the type
>>> on the function, it works.
>>>
>>> Is there a way to pass Array{Array{Float64,N},1 to my function? Do I even
>>> need to fix the type in the function to get good performance?
>>>
>>> # version of Julia: 0.4.0-rc3
>>>
>>> @generated function
>>> testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
>>> quote
>>> for j in 1:$N, i in eachindex(X)
>>> X[i][j] = Y[j]
>>> end
>>> return X
>>> end
>>> end
>>>
>>> @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
>>> quote
>>> for j in 1:$N, i in eachindex(X)
>>> X[i][j] = Y[j]
>>> end
>>> return X
>>> end
>>> end
>>>
>>> # Setup for function call
>>> InnerArrayPts = 3
>>> OuterArrayPts = 10
>>> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
>>> Yinput = rand(InnerArrayPts)
>>>
>>> # Method Error Problem
>>> testfun1!(Xinput,tuple(Yinput...))
>>>
>>> # This works
>>> testfun2!(Xinput,tuple(Yinput...))
>>>
>>> I also tried with the following version of testfun1!() and again got a
>>> method error.
>>>
>>> @generated function
>>> testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
>>> quote
>>> for j in 1:$N, i in eachindex(X)
>>> X[i][j] = Y[j]
>>> end
>>> return X
>>> end
>>> end
>>>
>>>
>>> I am sure i am misunderstanding something quite fundamental and/or missing
>>> something straightforward...
>>>
>>> Thanks,
>>> Alan


[julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Tomas Lycken


I think, this may be a little unfair to Julia.

I agree that it’s unfair - but new users are seldom fair. What I meant to 
get at was not that Julia code by an inexperienced programmer is worse than 
anything else, but just that since Julia *can* be so fast, I think there’s 
a big risk that the gap between potential and actual throughput in many 
“first real Julia program”s becomes an annoyance, even if Julia has the 
potential to reach further than Python ever could. In other words, it’s not 
Julia’s actual performance that is the problem, but the way it differs from 
expectations.

// T

On Tuesday, September 29, 2015 at 2:50:14 PM UTC+2, Páll Haraldsson wrote:

On Tuesday, September 29, 2015 at 10:20:18 AM UTC, Tomas Lycken wrote:
>>
>> One thing Python does well, which Julia doesn't (yet) succeed in, is make 
>> it easy to start coding from zero experience and get something that 
>> executes "well enough"
>>
>
> I think, this may be a little unfair to Julia. Python (and Perl) succeeded 
> not because it is fast (only because it has fast development times - to 
> begin with at least). Julia is expected to be fast, compared to C, Python 
> isn't. Even if Julia were as slow as Python, it seems to be a better 
> language - more maintainable, as more static and doesn't really have 
> duck-typing (and Hoare's "billion dollar mistake"), right?
>
> (although, as always with first-time coders, code organization and 
>> readability might still leave some things to wish for...).
>>
>> In Julia, it's very much possible to get something to run, but the 
>> performance differences between well-written and not-so-well-written code 
>> are *huge*.
>>
>
> Right, but even your code that is written to be slow should work (no 
> segfaults..) and not be slower than Python? At least, those would be 
> exceptional cases, all that I would like to know about as I see no good 
> reason for it.
>
> The way I see it, yes, type-instability makes your code way slower, but 
> isn't that (roughly) the same as in Python, where all code is 
> "type-unstable", because of duck-typing? That is or was at least the 
> default. Now PyPy and Numba etc. helps (that didn't exist in the 
> beginning/10 years ago)
>
> This means that most users will show their code to someone who knows more 
>> than they do, and more likely than not get a first reaction along the lines 
>> of "everything you do is wrong". Although the statement isn't untrue, it's 
>> very off-putting - and even more off-putting is the fact that there's a lot 
>> of "computer sciencey" stuff you need to understand to be able to grasp
>>
>
> Only for fast code, not "correct" code. And 90% of code isn't 
> speed-critical. To a beginner this seems very good. First you learn the 
> basics. You can prototype and later optimize ("premature optimization is 
> the root of all evil"?).
>
> *why* you did it wrong (type stability, difference between abstract and 
>> leaf types, difference between anonymous and named functions etc).
>>
>> Don't get me wrong - I think Julia is doing a lot of things right, and 
>> I'm glad that these "CS-y" questions are asked and handled up-front: this 
>> is what gives Julia much of its power. Hopefully, much of the performance 
>> difference between hacked-together-rubbish code and well-polished code will 
>> be eradicated by version 1.0, and we'll see how popular Julia becomes then.
>>
>> Currently, though, the success of any general programming language or 
>> tool seems to hinge mostly on what you can build with it (Objective C is 
>> ugly as hell
>>
>
> yes.. You can actually call Objective-C (OS X) from Julia.. I think 
> Android support should come first, then iOS support. Neither should be too 
> hard, possibly Apple would object..
>
> -- 
> Palli.
>
> ​


[julia-users] [Doc, IJulia, rc3] `cumsum` documentation problem?

2015-09-29 Thread Sisyphuss


```
Cumulative sum along a dimension ``dim`` (defaults to 1).
See also :func:`cumsum!` to use a preallocated output array,
both for performance and to control the precision of the
output (e.g. to avoid overflow).

cumsum cumsum! cumsum_kbn
```




[julia-users] Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Alan Crawford


I would like to preallocate memory of an array of arrays and pass it to a 
function to be filled in. I have created an example below that illustrates 
my question(s).

Based on my (probably incorrect) understanding that it would be desirable 
to fix the type in my function, I would like to be able to pass my array of 
arrays, X, in a type-stable way.  However, I can't seem to pass 
Array{Array{Float64,N},1}. If, however, I do not attempt to impose the type 
on the function, it works. 

Is there a way to pass Array{Array{Float64,N},1} to my function? Do I even 
need to fix the type in the function to get good performance?

# version of Julia: 0.4.0-rc3

@generated function testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
    quote
        for j in 1:$N, i in eachindex(X)
            X[i][j] = Y[j]
        end
        return X
    end
end

@generated function testfun2!{N}(X,Y::NTuple{N,Float64})
    quote
        for j in 1:$N, i in eachindex(X)
            X[i][j] = Y[j]
        end
        return X
    end
end

# Setup for function call
InnerArrayPts = 3
OuterArrayPts = 10
Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
Yinput = rand(InnerArrayPts)

# Method Error Problem
testfun1!(Xinput,tuple(Yinput...))

# This works
testfun2!(Xinput,tuple(Yinput...))

I also tried with the following version of testfun1!() and again got a 
method error.

@generated function testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
    quote
        for j in 1:$N, i in eachindex(X)
            X[i][j] = Y[j]
        end
        return X
    end
end


I am sure I am misunderstanding something quite fundamental and/or missing 
something straightforward...

Thanks,
Alan





[julia-users] Re: Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Tomas Lycken


If you’re trying this out in the REPL, you might have stumbled on the fact 
that list comprehensions currently aren’t type stable, unless you tell 
Julia the type of the elements. So instead of 
`Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]` in your setup, do 
`Xinput = Vector{Float64}[Array(Float64, InnerArrayPts) for r in 1:OuterArrayPts]` 
(I also switched which array constructor you use, but that shouldn’t matter 
- it’s just a question of style, which is subjective :) ).

Note that Vector{Float64} is just a type alias for Array{Float64, 1}.
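A quick (hypothetical) REPL check makes the difference visible:

```julia
Xloose = [Array(Float64, 3) for r in 1:5]
Xtight = Vector{Float64}[Array(Float64, 3) for r in 1:5]

typeof(Xloose)   # the element type may be inferred loosely at global scope
typeof(Xtight)   # Array{Array{Float64,1},1}, which matches testfun1!'s signature
```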

// T

On Tuesday, September 29, 2015 at 1:20:02 PM UTC+2, Alan Crawford wrote:


>
> I would like to preallocate memory of an array of arrays and pass it to a 
> function to be filled in. I have created an example below that illustrates 
> my question(s).
>
> Based on my (probably incorrect) understanding that it would be desirable 
> to fix the type in my function, I would like to be able to pass my array of 
> arrays, X, in a type stable way.  However, I can't seem to pass 
> Array{Array{Float64,N},1}. If, however, i do not attempt to impose the type 
> on the function, it works. 
>
> Is there a way to pass Array{Array{Float64,N},1 to my function? Do I even 
> need to fix the type in the function to get good performance?
>
> # version of Julia: 0.4.0-rc3
>
> @generated function 
> testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
> @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
> # Setup for function call
> InnerArrayPts = 3
> OuterArrayPts = 10
> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
> Yinput = rand(InnerArrayPts)
>
> # Method Error Problem
> testfun1!(Xinput,tuple(Yinput...))
>
> # This works
> testfun2!(Xinput,tuple(Yinput...))
>
> I also tried with the following version of testfun1!() and again got a 
> method error.
>
> @generated function 
> testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
>
> I am sure i am misunderstanding something quite fundamental and/or missing 
> something straightforward...
>
> Thanks,
> Alan
>
>
>
> ​


[julia-users] Re: AssertionError: Base.Test.length(#3232#inftypes) Base.Test.== 1

2015-09-29 Thread Tomas Lycken


Hah, it helps to ask for help. Then you find the solution yourself!

For completeness, here’s the entire erroring file:

module ConstantTests

using Interpolations, Base.Test

A = rand(Float64, 10) * 100

for (constructor, copier) in ((interpolate, x->x), (interpolate!, copy)) # this corresponds to line 12
    itp1c = @inferred(constructor(copier(A), BSpline(Constant()), OnCell()))

    # rest of the loop body commented out
end

end

The problem was that I had updated the signature of interpolate, but not of 
interpolate!. I don’t understand why I wasn’t given a no method error, 
though…

// T

On Tuesday, September 29, 2015 at 1:31:16 PM UTC+2, Tomas Lycken wrote:

I'm introducing some breaking changes in a package, and I want to do it 
> little-by-little and running the test-suite in parts in between to make 
> sure I don't break more than I intend to. After changing a couple of 
> constructors (but probably not all that need it) from taking `Type{T}` 
> arguments to taking `T` arguments, I am now seeing the following error:
>
> ```
> julia> workspace(); include("constant.jl")
> ERROR: LoadError: AssertionError: Base.Test.length(#3232#inftypes) 
> Base.Test.== 1
>  [inlined code] from test.jl:161
>  in anonymous at no file:160
>  in include at boot.jl:261
>  in include_from_node1 at loading.jl:304
> while loading C:\Users\Tomas 
> Lycken\.julia\v0.4\Interpolations\test\b-splines\constant.jl, in expression 
> starting on lin
> e 12
> ```
>
> Line 12, referred to on the last line, is the calling expression in my 
> test file, so that doesn't say much about what went wrong.
>
> Any hints on how I can narrow the cause of this down?
>
> // T
>


[julia-users] AssertionError: Base.Test.length(#3232#inftypes) Base.Test.== 1

2015-09-29 Thread Tomas Lycken
I'm introducing some breaking changes in a package, and I want to do it 
little-by-little and running the test-suite in parts in between to make 
sure I don't break more than I intend to. After changing a couple of 
constructors (but probably not all that need it) from taking `Type{T}` 
arguments to taking `T` arguments, I am now seeing the following error:

```
julia> workspace(); include("constant.jl")
ERROR: LoadError: AssertionError: Base.Test.length(#3232#inftypes) 
Base.Test.== 1
 [inlined code] from test.jl:161
 in anonymous at no file:160
 in include at boot.jl:261
 in include_from_node1 at loading.jl:304
while loading C:\Users\Tomas 
Lycken\.julia\v0.4\Interpolations\test\b-splines\constant.jl, in expression 
starting on lin
e 12
```

Line 12, referred to on the last line, is the calling expression in my test 
file, so that doesn't say much about what went wrong.

Any hints on how I can narrow the cause of this down?

// T


Re: [julia-users] Escher image

2015-09-29 Thread Shashi Gowda
You can put the image in the assets/ directory under the directory you
started the server in and then show it with image("assets/img.jpg")
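A minimal sketch, assuming the usual Escher convention of a `main(window)` function in the .jl file you serve (the asset path is a placeholder):

```julia
using Escher

function main(window)
    image("assets/img.jpg")   # path is relative to the served directory
end
```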

On Tue, Sep 29, 2015, 11:50 AM Yakir Gagnon <12.ya...@gmail.com> wrote:

> How do you show a local image (say it’s in the directory where you ran escher
> --serve)?
> ​
>


Re: [julia-users] Re: Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Alan Crawford
Many thanks!
alan
On 29 Sep 2015, at 13:04, Tomas Lycken  wrote:

> makes a difference in this specific case is a little beyond me, but an 
> educated guess says that with that guarantee, it is possible to know what 
> type Array(Float64, InnerArrayPts) will return, making it possible to also 
> type the comprehension tightly



[julia-users] Re: Passing data through Optim

2015-09-29 Thread Tomas Lycken


Kristoffer, your example still uses a *non-const* global. Using this 
benchmark , I get the 
following results:

Non-const global
  0.017629 seconds (999.84 k allocations: 15.256 MB)
Const global
  0.02 seconds (6 allocations: 192 bytes)
Closure
  0.08 seconds (19 allocations: 928 bytes)

So yes, a closure is much better than using a non-const global, but using a 
const global still beats it. However, both are fast enough that it probably 
won’t be the bottleneck in your application anymore :)
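For reference, here is a sketch of the const-global variant being timed (the original gist link was lost from the archive, so these definitions are an assumption; the function name follows the @code_llvm dump below):

```julia
const a_const = 3

function f_const_glob(N)
    s = 0
    for i = 1:N
        s += a_const   # reading a const global is type stable
    end
    return s
end

@time f_const_glob(10^5)
```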

Although I can by no means read and understand it, it is also instructive 
to look at the dumps from @code_llvm to get a picture of the difference in 
code complexity:

Const global:

julia> @code_llvm f_const_glob(10^5)

define i64 @julia_f_const_glob_2617(i64) {
top:
  %1 = icmp slt i64 %0, 1
  br i1 %1, label %L3, label %L.preheader

L.preheader:  ; preds = %top
  %2 = icmp sgt i64 %0, 0
  %.op = mul i64 %0, 3
  %3 = select i1 %2, i64 %.op, i64 0
  br label %L3

L3:   ; preds = %L.preheader, %top
  %s.1 = phi i64 [ 0, %top ], [ %3, %L.preheader ]
  ret i64 %s.1
}

Closure:

julia> @code_llvm f_clos(10^5, 3)

define %jl_value_t* @julia_f_clos_2616(i64, i64) {
top:
  %2 = alloca [5 x %jl_value_t*], align 8
  %.sub = getelementptr inbounds [5 x %jl_value_t*]* %2, i64 0, i64 0
  %3 = getelementptr [5 x %jl_value_t*]* %2, i64 0, i64 2
  %4 = getelementptr [5 x %jl_value_t*]* %2, i64 0, i64 3
  store %jl_value_t* inttoptr (i64 6 to %jl_value_t*), %jl_value_t** %.sub, align 8
  %5 = getelementptr [5 x %jl_value_t*]* %2, i64 0, i64 1
  %6 = load %jl_value_t*** @jl_pgcstack, align 8
  %.c = bitcast %jl_value_t** %6 to %jl_value_t*
  store %jl_value_t* %.c, %jl_value_t** %5, align 8
  store %jl_value_t** %.sub, %jl_value_t*** @jl_pgcstack, align 8
  store %jl_value_t* null, %jl_value_t** %3, align 8
  store %jl_value_t* null, %jl_value_t** %4, align 8
  %7 = getelementptr [5 x %jl_value_t*]* %2, i64 0, i64 4
  store %jl_value_t* null, %jl_value_t** %7, align 8
  %8 = load %jl_value_t** inttoptr (i64 2147534600 to %jl_value_t**), align 8
  store %jl_value_t* %8, %jl_value_t** %4, align 8
  %9 = load %jl_value_t** inttoptr (i64 2164502072 to %jl_value_t**), align 8
  store %jl_value_t* %9, %jl_value_t** %7, align 8
  %10 = call %jl_value_t* @jl_f_instantiate_type(%jl_value_t* null, %jl_value_t** %4, i32 2)
  store %jl_value_t* %10, %jl_value_t** %4, align 8
  %11 = call %jl_value_t* @jl_f_svec(%jl_value_t* null, %jl_value_t** null, i32 0)
  store %jl_value_t* %11, %jl_value_t** %7, align 8
  %12 = call %jl_value_t* @jl_f_svec(%jl_value_t* null, %jl_value_t** %4, i32 2)
  store %jl_value_t* %12, %jl_value_t** %4, align 8
  %13 = call %jl_value_t* @jl_box_int64(i64 signext %1)
  store %jl_value_t* %13, %jl_value_t** %7, align 8
  %14 = call %jl_value_t* (i64, ...)* @jl_svec(i64 1, %jl_value_t* %13)
  store %jl_value_t* %14, %jl_value_t** %7, align 8
  %15 = call %jl_value_t* @jl_new_closure(i8* null, %jl_value_t* %14, 
%jl_value_t* inttoptr (i64 2221717744 to %jl_value
_t*))
  store %jl_value_t* %15, %jl_value_t** %7, align 8
  %16 = load %jl_value_t** @jl_false, align 8
  %17 = call %jl_value_t* @jl_method_def(%jl_value_t* inttoptr (i64 507171808 
to %jl_value_t*), %jl_value_t** %3, %jl_va
lue_t* null, %jl_value_t* null, %jl_value_t* %12, %jl_value_t* %15, 
%jl_value_t* %16, %jl_value_t* inttoptr (i64 2179532
144 to %jl_value_t*), i32 0)
  %18 = load %jl_value_t** %3, align 8
  %19 = bitcast %jl_value_t* %18 to %jl_value_t* (%jl_value_t*, %jl_value_t**, 
i32)**
  %20 = load %jl_value_t* (%jl_value_t*, %jl_value_t**, i32)** %19, align 8
  %21 = call %jl_value_t* @jl_box_int64(i64 signext %0)
  store %jl_value_t* %21, %jl_value_t** %4, align 8
  %22 = call %jl_value_t* %20(%jl_value_t* %18, %jl_value_t** %4, i32 1)
  %23 = load %jl_value_t** %5, align 8
  %24 = getelementptr inbounds %jl_value_t* %23, i64 0, i32 0
  store %jl_value_t** %24, %jl_value_t*** @jl_pgcstack, align 8
  ret %jl_value_t* %22
}

On Tuesday, September 29, 2015 at 9:18:29 AM UTC+2, Kristoffer Carlsson 
wrote:

In my experience a closure is much faster than a function that accesses 
> global variables even if it is passed to another function as an argument.
>
> Two different examples. Here we pass a function that accesses (non const) 
> globals to another function:
>
> a = 3
>
> g(f::Function, b, N) = f(b, N)
>
> function f_glob(b, N)
> s = 0
> for i = 1:N
> s += a
> end
> return s
> end
>
> @time g(f_glob, 3, 10^6)
>
> 300
>
> 0.020655 seconds (999.84 k allocations: 15.256 MB, 18.40% gc time)
>
>
> Here we instead use an outer function that generates the desired closure 
> and pass the closure as the function argument:
>
> function f_clos(b, N, a)
> function f_inner(b, N)
> s = 0
> for i = 1:N
> s += a
> end
> return s
> end
> 

[julia-users] Re: Passing data through Optim

2015-09-29 Thread Andras Niedermayer
If you're using Julia 0.4 you can also use call overloading, which is 
almost as convenient as closures and as fast as const globals. An extension 
of Tomas's benchmark 
 gives me this:

Non-const global
  0.027968 seconds (999.84 k allocations: 15.256 MB)
Const global
  0.03 seconds (6 allocations: 192 bytes)
Closure
  0.15 seconds (19 allocations: 928 bytes)
Call Overloading
  0.03 seconds (6 allocations: 192 bytes)

I'm not sure whether the current version of Optim.jl and other libraries 
already support callbacks with call overloading.
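For reference, a minimal sketch of what the 0.4 call-overloading approach looks like (my own illustration with made-up names, not the code from the linked benchmark):

```
immutable Model          # holds the "data" that would otherwise be a global
    a::Int
end

# Call overloading (Julia 0.4): instances of Model become callable
function Base.call(m::Model, b, N)
    s = 0
    for i = 1:N
        s += m.a
    end
    return s
end

g(f, b, N) = f(b, N)     # any higher-order function, e.g. an optimizer

m = Model(3)
@time g(m, 3, 10^6)
```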

On Tuesday, September 29, 2015 at 11:11:42 AM UTC+2, Tomas Lycken wrote:
>
> Kristoffer, your example still uses a *non-const* global. Using this 
> benchmark , I get 
> the following results:
>
> Non-const global
>   0.017629 seconds (999.84 k allocations: 15.256 MB)
> Const global
>   0.02 seconds (6 allocations: 192 bytes)
> Closure
>   0.08 seconds (19 allocations: 928 bytes)
>
> So yes, a closure is much better than using a non-const global, but using 
> a const global still beats it. However, both are fast enough that it 
> probably won’t be the bottleneck in your application anymore :)
>

[julia-users] Re: Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Tomas Lycken


Type-inference in the REPL usually doesn’t give you as tight types as you’d 
like, mainly because it has to give room for what you *might* do in the 
future. Declaring variables const in the REPL (i.e. in global scope) helps 
with that, since it gives type-inference some guarantees that the variable 
won’t change its type.

Exactly why that makes a difference in this specific case is a little 
beyond me, but an educated guess says that with that guarantee, it is 
possible to know what type Array(Float64, InnerArrayPts) will return, 
making it possible to also type the comprehension tightly.

// T
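(A quick way to see the effect in the 0.4 REPL, reusing the names from Alan's example below; this is an added illustration, not part of the original exchange:)

```
InnerArrayPts = 3             # non-const global
const InnerArrayPts1 = 3      # const global

eltype([Array(Float64, InnerArrayPts)  for r in 1:10])   # probably Any
eltype([Array(Float64, InnerArrayPts1) for r in 1:10])   # Array{Float64,1}
```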

On Tuesday, September 29, 2015 at 1:58:24 PM UTC+2, Alan Crawford wrote:

Thanks Tomas, works perfectly.
>
> I was testing out some code in the REPL... Relatedly and without abusing 
> the thread too much, I wondered if you might be able to help me understand 
> why the setting InnerArrayPts as const created the desired type stable 
> array comprehension? Namely,
>
> const InnerArrayPts1 = 3 
> OuterArrayPts = 10
> Xinput1 = [Array(Float64,InnerArrayPts1) for r in 1:OuterArrayPts]
>
> Cheers
> Alan
>
>
>
> On Tuesday, 29 September 2015 12:28:36 UTC+1, Tomas Lycken wrote:
>>
>> If you’re trying this out in the REPL, you might have stumbled on the 
>> fact that list comprehensions currently aren’t type stable, unless you tell 
>> Julia the type of the elements. So instead of Xinput = 
>> [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts] in your setup, 
>> do Xinput = Vector{Float64}[Array(Float64, InnerArrayPts) for r in 
>> 1:OuterArrayPts] (I also switched which array constructor you use, but 
>> that shouldn’t matter - it’s just a question of style, which is subjective 
>> :) ).
>>
>> Note that Vector{Float64} is just a type alias for Array{Float64, 1}.
>>
>> // T
>>
>> On Tuesday, September 29, 2015 at 1:20:02 PM UTC+2, Alan Crawford wrote:
>>
>>
>>>
>>> I would like to preallocate memory of an array of arrays and pass it to 
>>> a function to be filled in. I have created an example below that 
>>> illustrates my question(s).
>>>
>>> Based on my (probably incorrect) understanding that it would be 
>>> desirable to fix the type in my function, I would like to be able to pass 
>>> my array of arrays, X, in a type stable way.  However, I can't seem to pass 
>>> Array{Array{Float64,N},1}. If, however, i do not attempt to impose the type 
>>> on the function, it works. 
>>>
>>> Is there a way to pass Array{Array{Float64,N},1 to my function? Do I 
>>> even need to fix the type in the function to get good performance?
>>>
>>> # version of Julia: 0.4.0-rc3
>>>
>>> @generated function 
>>> testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
>>> quote
>>> for j in 1:$N, i in eachindex(X)
>>> X[i][j] = Y[j]
>>> end
>>> return X
>>> end
>>> end
>>>
>>> @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
>>> quote
>>> for j in 1:$N, i in eachindex(X)
>>> X[i][j] = Y[j]
>>> end
>>> return X
>>> end
>>> end
>>>
>>> # Setup for function call
>>> InnerArrayPts = 3
>>> OuterArrayPts = 10
>>> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
>>> Yinput = rand(InnerArrayPts)
>>>
>>> # Method Error Problem
>>> testfun1!(Xinput,tuple(Yinput...))
>>>
>>> # This works
>>> testfun2!(Xinput,tuple(Yinput...))
>>>
>>> I also tried with the following version of testfun1!() and again got a 
>>> method error.
>>>
>>> @generated function 
>>> testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
>>> quote
>>> for j in 1:$N, i in eachindex(X)
>>> X[i][j] = Y[j]
>>> end
>>> return X
>>> end
>>> end
>>>
>>>
>>> I am sure i am misunderstanding something quite fundamental and/or 
>>> missing something straightforward...
>>>
>>> Thanks,
>>> Alan
>>>
>>>
>>>
>>> ​
>>
> ​


[julia-users] Re: Julia release candidates downloading with Ubuntu PPA

2015-09-29 Thread Tommy Hofmann
According 
to https://groups.google.com/d/topic/julia-users/Cm5ToWMx3tw/discussion the 
answer is no.

On Tuesday, September 29, 2015 at 9:57:13 AM UTC+8, Scott Jones wrote:
>
> Because we needed to use some of the functionality which had been in 0.4, 
> we'd been using the nightly builds from 'ppa:staticfloat/julianightlies',
> but now we need to switch that to picking up the 0.4.0 release candidates 
> until 0.4 becomes the release (because some packages like PEGParser.jl
> are failing doing a Pkg.build, complaining about not working with 0.5).
>
> Is there any way of getting the 0.4 RC for Ubuntu?
>
> Thanks, Scott
>
>

[julia-users] What's the reason of the Success of Python?

2015-09-29 Thread Sisyphuss
While waiting for Julia 0.4 to stabilize, let's do some brainstorming.

What's the reason for the success of Python? 
If Julia had appeared 10 years earlier, would Python still have had this success?




[julia-users] Re: Passing data through Optim

2015-09-29 Thread Christopher Fisher
Thank you everyone for your help and examples. It's been very instructive. I 
found that simply declaring SubData in SubLoop1 as a const sped up the 
code 82-fold. I will try to incorporate the other advice wherever 
possible. 


[julia-users] Re: Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Alan Crawford
Thanks Tomas, works perfectly.

I was testing out some code in the REPL... Relatedly, and without abusing 
the thread too much, I wondered if you might be able to help me understand 
why setting InnerArrayPts as const created the desired type-stable 
array comprehension? Namely,

const InnerArrayPts1 = 3 
OuterArrayPts = 10
Xinput1 = [Array(Float64,InnerArrayPts1) for r in 1:OuterArrayPts]

Cheers
Alan



On Tuesday, 29 September 2015 12:28:36 UTC+1, Tomas Lycken wrote:
>
> If you’re trying this out in the REPL, you might have stumbled on the fact 
> that list comprehensions currently aren’t type stable, unless you tell 
> Julia the type of the elements. So instead of Xinput = 
> [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts] in your setup, 
> do Xinput = Vector{Float64}[Array(Float64, InnerArrayPts) for r in 
> 1:OuterArrayPts] (I also switched which array constructor you use, but 
> that shouldn’t matter - it’s just a question of style, which is subjective 
> :) ).
>
> Note that Vector{Float64} is just a type alias for Array{Float64, 1}.
>
> // T
>
> On Tuesday, September 29, 2015 at 1:20:02 PM UTC+2, Alan Crawford wrote:
>
>
>>
>> I would like to preallocate memory of an array of arrays and pass it to a 
>> function to be filled in. I have created an example below that illustrates 
>> my question(s).
>>
>> Based on my (probably incorrect) understanding that it would be desirable 
>> to fix the type in my function, I would like to be able to pass my array of 
>> arrays, X, in a type stable way.  However, I can't seem to pass 
>> Array{Array{Float64,N},1}. If, however, i do not attempt to impose the type 
>> on the function, it works. 
>>
>> Is there a way to pass Array{Array{Float64,N},1 to my function? Do I 
>> even need to fix the type in the function to get good performance?
>>
>> # version of Julia: 0.4.0-rc3
>>
>> @generated function 
>> testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
>> quote
>> for j in 1:$N, i in eachindex(X)
>> X[i][j] = Y[j]
>> end
>> return X
>> end
>> end
>>
>> @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
>> quote
>> for j in 1:$N, i in eachindex(X)
>> X[i][j] = Y[j]
>> end
>> return X
>> end
>> end
>>
>> # Setup for function call
>> InnerArrayPts = 3
>> OuterArrayPts = 10
>> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
>> Yinput = rand(InnerArrayPts)
>>
>> # Method Error Problem
>> testfun1!(Xinput,tuple(Yinput...))
>>
>> # This works
>> testfun2!(Xinput,tuple(Yinput...))
>>
>> I also tried with the following version of testfun1!() and again got a 
>> method error.
>>
>> @generated function 
>> testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
>> quote
>> for j in 1:$N, i in eachindex(X)
>> X[i][j] = Y[j]
>> end
>> return X
>> end
>> end
>>
>>
>> I am sure i am misunderstanding something quite fundamental and/or 
>> missing something straightforward...
>>
>> Thanks,
>> Alan
>>
>>
>>
>> ​
>


[julia-users] Displaying images in Jupyter notebook

2015-09-29 Thread cormullion
I installed Jupyter and opened a new notebook. It works fine (Jupyter 
4.0.6, Julia 0.4.0-rc2). Now I want to start using Images.jl. So:

using Images
img = imread("/tmp/simple.png")

But I get this response:

UnableToOpenConfigureFile `coder.xml' @ 
warning/configure.c/GetConfigureOptions/706

 in error at 
/Applications/Julia-0.4.0.app/Contents/Resources/julia/lib/julia/sys.dylib
 in error at /Users/me/.julia/v0.4/Images/src/ioformats/libmagickwand.jl:146
 in setimageformat at 
/Users/me/.julia/v0.4/Images/src/ioformats/libmagickwand.jl:328
 in getblob at 
/Users/me/.julia/v0.4/Images/src/ioformats/libmagickwand.jl:208
 in writemime at /Users/me/.julia/v0.4/Images/src/io.jl:226
 in base64encode at base64.jl:160
 in display_dict at /Users/me/.julia/v0.4/IJulia/src/execute_request.jl:32


However, in a terminal these commands work fine:

julia> using Images

julia> img = imread("/tmp/simple.png")
RGB4 Images.Image with:
  data: 3000x2308 
Array{ColorTypes.RGB4{FixedPointNumbers.UfixedBase{UInt8,8}},2}
  properties:
imagedescription: 
spatialorder:  x y
pixelspacing:  1 1

Which suggests I have to configure something in Jupyter to connect 
something to something else, since ordinary Julia seems to be happy with 
the imagemagick stuff. 
I'd welcome some clues...

(I'm keen on trying Images.jl. But at the moment I can but glimpse its 
promised magnificence on the far horizon... :)
 


Re: [julia-users] Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Mauro
> Thanks Mauro - really useful to know. So it seems that in this
> particular instance the number of nested loops does change with N, so
> I think the code would fall into the latter case.

Sorry, I wasn't clear:  no, only the number of iterations would change.
By the number of nested loops I meant going from, say, two:

for i=1:n
   for j=1:m
   ...
   end
end

to three

for i=1:n
   for j=1:m
   for k=1:l
   ...
   end
   end
end

which could be done with a generated function (look for Base.Cartesian).
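For example, a minimal sketch in the style of the Base.Cartesian docs (my illustration, not Mauro's code), where the number of generated loops follows the array's dimensionality N:

```
using Base.Cartesian

@generated function sumall{T,N}(A::Array{T,N})
    quote
        s = zero(T)
        @nloops $N i A begin      # expands into N nested for-loops
            s += @nref $N A i
        end
        return s
    end
end

sumall(rand(2, 3, 4))   # three nested loops are generated for a 3-d array
```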

>> Well, it is always hard to tell what's best.  If you need certainty,
>> then you need to benchmark.  Note that in below example, Julia will
>> compile a specialized version of the function for each NTuple anyway.
>> This may be good or bad, depending on how much time compilation vs
>> running takes.

And above was also not clear, I should have written:

"Note that in MY example below, Julia will compile a specialized version
of the function for each NTuple with a different N anyway."

>> A case for a generated function would be if, say, the number of nested
>> loops would change depending on some parameter, say the dimensionality
>> of an array.
>>
>>> Thanks, Alan
>>>
>>>
>>> was just an illustration of a the larger piece of code where I am making 
>>> using
>>> On 29 Sep 2015, at 13:48, Mauro  wrote:
>>>
 I don't think that you need nor that you should use generated functions.
 But maybe I'm wrong, what are you trying to achieve?  This should work
 as you want:

 function testfun2!{N}(X,Y::NTuple{N,Float64})
   for i in eachindex(X), j in 1:N # much better to have the loop this way
   X[i][j] = Y[j]
   end
   return X
 end

 # Setup for function call
 InnerArrayPts = 3
 OuterArrayPts = 10
 Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
 Yinput = rand(InnerArrayPts)

 testfun2!(Xinput,tuple(Yinput...))

 On Tue, 2015-09-29 at 13:20, Alan Crawford  wrote:
> I would like to preallocate memory of an array of arrays and pass it to a
> function to be filled in. I have created an example below that illustrates
> my question(s).
>
> Based on my (probably incorrect) understanding that it would be desirable
> to fix the type in my function, I would like to be able to pass my array 
> of
> arrays, X, in a type stable way.  However, I can't seem to pass
> Array{Array{Float64,N},1}. If, however, i do not attempt to impose the 
> type
> on the function, it works.
>
> Is there a way to pass Array{Array{Float64,N},1 to my function? Do I even
> need to fix the type in the function to get good performance?
>
> # version of Julia: 0.4.0-rc3
>
> @generated function
> testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
> @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
> # Setup for function call
> InnerArrayPts = 3
> OuterArrayPts = 10
> Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
> Yinput = rand(InnerArrayPts)
>
> # Method Error Problem
> testfun1!(Xinput,tuple(Yinput...))
>
> # This works
> testfun2!(Xinput,tuple(Yinput...))
>
> I also tried with the following version of testfun1!() and again got a
> method error.
>
> @generated function
> testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
> quote
> for j in 1:$N, i in eachindex(X)
> X[i][j] = Y[j]
> end
> return X
> end
> end
>
>
> I am sure i am misunderstanding something quite fundamental and/or missing
> something straightforward...
>
> Thanks,
> Alan


Re: [julia-users] [Doc, IJulia, rc3] `cumsum` documentation problem?

2015-09-29 Thread Yichao Yu
On Tue, Sep 29, 2015 at 10:01 AM, Sisyphuss  wrote:
> Cumulative sum along a dimension ``dim`` (defaults to 1).
> See also :func:`cumsum!` to use a preallocated output array,
> both for performance and to control the precision of the
> output (e.g. to avoid overflow).
>
> cumsum cumsum! cumsum_kbn
>

I have no idea what the "problem" you are talking about is if you
cannot be more specific.

Just taking a wild guess, you are probably talking about
https://github.com/JuliaLang/julia/issues/13047


[julia-users] `broadcast` should have returned the BitArray type?

2015-09-29 Thread Sisyphuss
[1;3] .> 2

2-element BitArray{1}:
 false
  true


=

broadcast(>,[1;3],2)

2-element Array{Int64,1}:
 0
 1


=

Is the difficulty because of type instability?




Re: [julia-users] Passing Array{Array{T,N},1} to a function

2015-09-29 Thread Jameson Nash
Note that generated functions are not generally needed for that, as Tim Holy
pointed out yesterday in
https://groups.google.com/d/msg/julia-users/iX-R5luh-0M/QUeBXBLxBwAJ. That
also helps avoid the downsides of generated functions (they are hard to write
correctly, hard to read, slow for dispatch, may be bad for type inference,
and have very high static memory usage).


On Tue, Sep 29, 2015 at 10:44 AM Mauro  wrote:

> > Thanks Mauro - really useful to know. So it seems that in this
> > particular instance the number of nested loops does change with N, so
> > I think the code would fall into the latter case.
>
> Sorry, I wasn't clear:  no, only the number of iterations would change.
> With numbers of nested loops I meant, say two:
>
> for i=1:n
>for j=1:m
>...
>end
> end
>
> to three
>
> for i=1:n
>for j=1:m
>for k=1:l
>...
>end
>end
> end
>
> which could be done with a generated function (look for Base.Cartesian).
>
> >> Well, it is always hard to tell what's best.  If you need certainty,
> >> then you need to benchmark.  Note that in below example, Julia will
> >> compile a specialized version of the function for each NTuple anyway.
> >> This may be good or bad, depending on how much time compilation vs
> >> running takes.
>
> And above was also not clear, I should have written:
>
> "Note that in MY example below, Julia will compile a specialized version
> of the function for each NTuple with a different N anyway."
>
> >> A case for a generated function would be if, say, the number of nested
> >> loops would change depending on some parameter, say the dimensionality
> >> of an array.
> >>
> >>> Thanks, Alan
> >>>
> >>>
> >>> was just an illustration of a the larger piece of code where I am
> making using
> >>> On 29 Sep 2015, at 13:48, Mauro  wrote:
> >>>
>  I don't think that you need nor that you should use generated
> functions.
>  But maybe I'm wrong, what are you trying to achieve?  This should work
>  as you want:
> 
>  function testfun2!{N}(X,Y::NTuple{N,Float64})
>    for i in eachindex(X), j in 1:N # much better to have the loop this
> way
>    X[i][j] = Y[j]
>    end
>    return X
>  end
> 
>  # Setup for function call
>  InnerArrayPts = 3
>  OuterArrayPts = 10
>  Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
>  Yinput = rand(InnerArrayPts)
> 
>  testfun2!(Xinput,tuple(Yinput...))
> 
>  On Tue, 2015-09-29 at 13:20, Alan Crawford 
> wrote:
> > I would like to preallocate memory of an array of arrays and pass it
> to a
> > function to be filled in. I have created an example below that
> illustrates
> > my question(s).
> >
> > Based on my (probably incorrect) understanding that it would be
> desirable
> > to fix the type in my function, I would like to be able to pass my
> array of
> > arrays, X, in a type stable way.  However, I can't seem to pass
> > Array{Array{Float64,N},1}. If, however, i do not attempt to impose
> the type
> > on the function, it works.
> >
> > Is there a way to pass Array{Array{Float64,N},1 to my function? Do I
> even
> > need to fix the type in the function to get good performance?
> >
> > # version of Julia: 0.4.0-rc3
> >
> > @generated function
> > testfun1!{N}(X::Array{Array{Float64,1},1},Y::NTuple{N,Float64})
> > quote
> > for j in 1:$N, i in eachindex(X)
> > X[i][j] = Y[j]
> > end
> > return X
> > end
> > end
> >
> > @generated function testfun2!{N}(X,Y::NTuple{N,Float64})
> > quote
> > for j in 1:$N, i in eachindex(X)
> > X[i][j] = Y[j]
> > end
> > return X
> > end
> > end
> >
> > # Setup for function call
> > InnerArrayPts = 3
> > OuterArrayPts = 10
> > Xinput = [Array{Float64}(InnerArrayPts) for r in 1:OuterArrayPts]
> > Yinput = rand(InnerArrayPts)
> >
> > # Method Error Problem
> > testfun1!(Xinput,tuple(Yinput...))
> >
> > # This works
> > testfun2!(Xinput,tuple(Yinput...))
> >
> > I also tried with the following version of testfun1!() and again got
> a
> > method error.
> >
> > @generated function
> > testfun1!{N}(X::Array{Array{Float64,N},1},Y::NTuple{N,Float64})
> > quote
> > for j in 1:$N, i in eachindex(X)
> > X[i][j] = Y[j]
> > end
> > return X
> > end
> > end
> >
> >
> > I am sure i am misunderstanding something quite fundamental and/or
> missing
> > something straightforward...
> >
> > Thanks,
> > Alan
>


Re: [julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Scott Jones


On Tuesday, September 29, 2015 at 10:14:48 AM UTC-4, Tom Breloff wrote:
>
> Leave it to Scott to make things more complicated... :)
>

Of course! :)
I just wanted it to better reflect which language might do better for a 
company, where the programmer's time may be worth a lot more than the 
execution time (both to initially implement a correct solution and to 
maintain it).
Generic (multiple-dispatch) programming à la Julia can, I think, be a big win 
there - giving you more of what (single-dispatch) OO programming has always 
promised but never really delivered that well: good code reuse.
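A tiny illustration of the kind of reuse multiple dispatch gives (my own example, not Scott's):

```
abstract Shape                       # Julia 0.4 syntax

immutable Circle <: Shape
    r::Float64
end

immutable Square <: Shape
    s::Float64
end

area(c::Circle) = pi * c.r^2
area(s::Square) = s.s^2

# One generic algorithm; it also works for shape types defined later by
# completely unrelated packages, as long as they implement area().
totalarea(shapes) = sum([area(s) for s in shapes])

totalarea([Circle(1.0), Square(2.0)])
```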



Re: [julia-users] [Doc, IJulia, rc3] `cumsum` documentation problem?

2015-09-29 Thread Sisyphuss
Yes, I did talk about that.
I thought it had been solved before the release candidates.


On Tuesday, September 29, 2015 at 4:17:46 PM UTC+2, Yichao Yu wrote:
>
> On Tue, Sep 29, 2015 at 10:01 AM, Sisyphuss  > wrote: 
> > Cumulative sum along a dimension ``dim`` (defaults to 1). 
> > See also :func:`cumsum!` to use a preallocated output array, 
> > both for performance and to control the precision of the 
> > output (e.g. to avoid overflow). 
> > 
> > cumsum cumsum! cumsum_kbn 
> > 
>
> I have no idea what the "problem" you are talking about is if you 
> cannot be more specific. 
>
> Just taking a wild guess, you are probably talking about 
> https://github.com/JuliaLang/julia/issues/13047 
>


Re: [julia-users] Pkg.[update()/install()/build()] woes on Windows 10 64 bit

2015-09-29 Thread Evan Fields
So I deleted just ArrayViews from .julia, still got errors. Decided to 
delete the whole of .julia, did Pkg.add("Images"), and got this:

   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.11 (2015-07-27 06:18 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/   |  x86_64-w64-mingw32

julia> Pkg.add("Images")
INFO: Cloning cache of BinDeps from 
git://github.com/JuliaLang/BinDeps.jl.git
INFO: Cloning cache of ColorTypes from 
git://github.com/JuliaGraphics/ColorTypes.jl.git
INFO: Cloning cache of ColorVectorSpace from 
git://github.com/JuliaGraphics/ColorVectorSpace.jl.git
INFO: Cloning cache of Colors from 
git://github.com/JuliaGraphics/Colors.jl.git
INFO: Cloning cache of Compat from git://github.com/JuliaLang/Compat.jl.git
INFO: Cloning cache of Dates from git://github.com/quinnj/Dates.jl.git
INFO: Cloning cache of Docile from 
git://github.com/MichaelHatherly/Docile.jl.git
INFO: Cloning cache of FixedPointNumbers from 
git://github.com/JeffBezanson/FixedPointNumbers.jl.git
INFO: Cloning cache of Graphics from 
git://github.com/JuliaLang/Graphics.jl.git
INFO: Cloning cache of HttpCommon from 
git://github.com/JuliaWeb/HttpCommon.jl.git
INFO: Cloning cache of Images from git://github.com/timholy/Images.jl.git
INFO: Cloning cache of Reexport from 
git://github.com/simonster/Reexport.jl.git
INFO: Cloning cache of SHA from git://github.com/staticfloat/SHA.jl.git
INFO: Cloning cache of SIUnits from git://github.com/Keno/SIUnits.jl.git
INFO: Cloning cache of TexExtensions from 
git://github.com/Keno/TexExtensions.jl.git
INFO: Cloning cache of URIParser from 
git://github.com/JuliaWeb/URIParser.jl.git
INFO: Cloning cache of Zlib from git://github.com/dcjones/Zlib.jl.git
INFO: Installing BinDeps v0.3.18
INFO: Installing ColorTypes v0.1.6
INFO: Installing ColorVectorSpace v0.0.4
INFO: Installing Colors v0.5.4
INFO: Installing Compat v0.7.3
INFO: Installing Dates v0.3.2
INFO: Installing Docile v0.5.19
INFO: Installing FixedPointNumbers v0.0.11
INFO: Installing Graphics v0.1.0
INFO: Installing HttpCommon v0.1.2
INFO: Installing Images v0.4.48
INFO: Installing Reexport v0.0.3
INFO: Installing SHA v0.1.2
INFO: Installing SIUnits v0.0.5
INFO: Installing TexExtensions v0.0.2
INFO: Installing URIParser v0.0.7
INFO: Installing Zlib v0.1.10
INFO: Building Images
At line:1 char:3
+ --help
+   ~
Missing expression after unary operator '--'.
At line:1 char:3
+ --help
+   
Unexpected token 'help' in expression or statement.
+ CategoryInfo  : ParserError: (:) [], 
ParentContainsErrorRecordException
+ FullyQualifiedErrorId : MissingExpressionAfterOperator

INFO: Installing ImageMagick library
INFO: Attempting to Create directory 
C:\Users\ejfie\.julia\v0.3\Images\deps\downloads
INFO: Attempting to Create directory 
C:\Users\ejfie\.julia\v0.3\Images\deps\usr\lib\x64
INFO: Attempting to Create directory 
C:\Users\ejfie\.julia\v0.3\Images\deps\downloads
INFO: Directory C:\Users\ejfie\.julia\v0.3\Images\deps\downloads already 
created
INFO: Downloading file 
http://www.imagemagick.org/download/binaries/ImageMagick-6.9.2-3-Q16-x64-dll.exe
  % Total% Received % Xferd  Average Speed   TimeTime Time 
 Current
 Dload  Upload   Total   SpentLeft 
 Speed
100 20.0M  100 20.0M0 0  3141k  0  0:00:06  0:00:06 --:--:-- 
4439k
INFO: Done downloading file 
http://www.imagemagick.org/download/binaries/ImageMagick-6.9.2-3-Q16-x64-dll.exe
INFO: Attempting to Create directory 
C:\Users\ejfie\.julia\v0.3\Images\deps\downloads
INFO: Directory C:\Users\ejfie\.julia\v0.3\Images\deps\downloads already 
created
INFO: Downloading file 
https://bintray.com/artifact/download/julialang/generic/innounp.exe
  % Total% Received % Xferd  Average Speed   TimeTime Time 
 Current
 Dload  Upload   Total   SpentLeft 
 Speed
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:--   
  0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:--   
  0
100  592k  100  592k0 0   251k  0  0:00:02  0:00:02 --:--:-- 
 847k
INFO: Done downloading file 
https://bintray.com/artifact/download/julialang/generic/innounp.exe
INFO: Changing Directory to C:\Users\ejfie\.julia\v0.3\Images\deps\downloads
; Version detected: 5506 (Unicode)
INFO: Package database updated

julia>

But things seem to be working, at least for now.

Interesting side note: I know things aren't supposed to work this way, but 
this physical machine seems to create Julia problems. In January I had 
similar problems with installing packages that even a fresh install of 
Julia didn't solve. Since then I've formatted my hard drive and installed a 
later version of Windows, but again even a fresh 

[julia-users] ANN: Glove.jl

2015-09-29 Thread Dom Luna
Hey all,

I started this a couple of months back but ran into a couple of issues and 
sort of just had it on the backburner. I worked on it these last couple of 
days and I've gotten it to a usable state.

https://github.com/domluna/Glove.jl

I haven't done anything to make it parallel yet, that would be the next big 
performance win.

Any tips or contributions to make it better would be more than welcome :)


Re: [julia-users] How do I pass an event handle via ccall?

2015-09-29 Thread Chris Stook
That simple detailed explanation is exactly what I needed.  Thank you.


[julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Edmondo Giovannozzi
First of all I want to say that you are doing an excellent job.
I program in Fortran and Python, but I'm keeping an eye on Julia; I haven't 
decided to switch yet. I'm not afraid of learning a new language, so I may do 
it in the future.

Python is slow, but most of the time I use vectorized instructions (with 
numpy), and it is quite fast for what is needed (processing of experimental 
data).  
The Python matplotlib graphics package is amazing. I know that it can be 
used from Julia, but when I tried, it took some seconds to import (not a 
big issue in itself, but it can be when you want to work interactively).

We also have a lot of code already written in Python. That's not necessarily 
a problem, as we already switched once from Matlab to Python.

If speed is needed I can use Fortran, as it is quite easy to link a 
Fortran procedure to Python with f2py.

The other point cited was true: when I tried to do something fast, I 
couldn't do it fast the first time (in the end I found a way to speed it up, 
and it was even faster than its Fortran equivalent).

A lot of the programs we have are simulation programs already written in 
Fortran, so we may have to stick with it.

There are also a couple of other things. One is the use of the Matlab-style 
dot operators .*, ./, etc. I don't think that was really a good idea (at 
least for me). Most of the time the operations I need are just elementwise 
ones, and only a few of them are really matrix operations. Of course it is 
not something that will keep me away from Julia, but it would be nice if 
there were ever the possibility to change that behaviour in the future 
(a daydream).
 
I also liked a lot the possibility in numpy of adding an axis to an array 
with the keyword numpy.newaxis. But I'm quite sure it wouldn't be a problem 
to add that to Julia (if it's not already possible, of course; I'm waiting 
for the final release of Julia 0.4 to have a new look at the new features).
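(For what it's worth, the same effect can already be had with reshape; a small added illustration:)

```
x = rand(5)
xcol = reshape(x, 5, 1)   # like x[:, numpy.newaxis] -> 5x1 matrix
xrow = reshape(x, 1, 5)   # like x[numpy.newaxis, :] -> 1x5 matrix
```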

The other suggestion is to speed up vectorized instructions. The fact that 
the manual recommends writing explicit loops in places where even in Fortran 
one wouldn't use loops anymore is a drawback.

And last, it is psychological, but one is always waiting for the first 
stable release (when is 1.0 coming...?).

By the way, I'm eager for the 0.4 release; I'll install it and I'll let you 
know my impressions.
  
  




[julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Daniel Carrera


On Tuesday, 29 September 2015 12:20:18 UTC+2, Tomas Lycken wrote:
>
> One thing Python does well, which Julia doesn't (yet) succeed in, is make 
> it easy to start coding from zero experience and get something that 
> executes "well enough" (although, as always with first-time coders, code 
> organization and readability might still leave some things to wish for...).
>

You don't think Julia does that?
 

> In Julia, it's very much possible to get something to run, but the 
> performance differences between well-written and not-so-well-written code 
> are *huge*.
>

But not-so-well written Julia code should be at least as fast as Python. 
Don't you think so?

 

> This means that most users will show their code to someone who knows more 
> than they do, and more likely than not get a first reaction along the lines 
> of "everything you do is wrong".
>

I don't think that this is more true for Julia than other languages. In 
Python you have to worry about indentation, and they have their own 
conventions about how things work. Every language has some learning curve. 
I think Julia's is shallow. Then again, maybe I only think that because I 
learned many languages before Julia.

 

> and even more off-putting is the fact that there's a lot of "computer 
> sciencey" stuff you need to understand to be able to grasp *why* you did it 
> wrong (type stability, difference between abstract and leaf types, 
> difference between anonymous and named functions etc).
>


You can get Python-like performance without getting computer-sciency.  But 
any program that achieves good performance requires you to learn some 
computer-sciency stuff. People who understand how computer hardware works, 
or how algorithms work, will always have some advantage over those who 
don't. I don't think any language will ever compensate for programmer 
skill. I don't think it's fair to expect Python-like effort to produce 
C-like performance.

 

> Don't get me wrong - I think Julia is doing a lot of things right, and I'm 
> glad that these "CS-y" questions are asked and handled up-front: this is 
> what gives Julia much of its power. Hopefully, much of the performance 
> difference between hacked-together-rubbish code and well-polished code will 
> be eradicated by version 1.0, and we'll see how popular Julia becomes then.
>

How do you envision that happening? I understand that compilers can get 
better, but no compiler will ever replace your algorithm by a 
cache-friendly alternative.

Cheers,
Daniel. 


[julia-users] fminbox question

2015-09-29 Thread Weichi Ding
Hi,

I'm new to the Optim package and am trying to find a minimum using bounding 
box constraints.
I wrote a function like this:

function func(x::Vector)
...
end

It works fine using optimize()

>optimize(func,[360e-12, 240e-12,15e-15])
Results of Optimization Algorithm
 * Algorithm: Nelder-Mead
 * Starting Point: [3.6e-10,2.4e-10,1.5e-14]
 * Minimum: [0.14043324412015543,-0.04701342334377595,0.000548836796380928]
 * Value of Function at Minimum: -160.00
 * Iterations: 54
 * Convergence: true
   * |x - x'| < NaN: false
   * |f(x) - f(x')| / |f(x)| < 1.0e-08: true
   * |g(x)| < NaN: false
   * Exceeded Maximum Number of Iterations: false
 * Objective Function Calls: 103
 * Gradient Call: 0

But when I try to use fminbox(), I got some error messages. Can you tell me 
what I missed? Thanks.

l=[1e-10,1e-10,1e-15]
u=[1000e-12,1000e-12,500e-15]
fminbox(func,[360e-12, 240e-12,15e-15],l,u)

ERROR: no method func(Array{Float64,1},Array{Float64,1})
 in fminbox at /home/dingw/.julia/Optim/src/fminbox.jl:138
 in fminbox at /home/dingw/.julia/Optim/src/fminbox.jl:190

>d1 = DifferentiableFunction(func)
DifferentiableFunction(func,g!,fg!)

>fminbox(d1,[360e-12, 240e-12,15e-15],l,u)

ERROR: no method 
fminbox(DifferentiableFunction,Array{Float64,1},Array{Float64,1},Array{Float64,1})





[julia-users] Re: Displaying images in Jupyter notebook

2015-09-29 Thread cormullion
thanks Steven. I looked again through Images' issues, and it might be related 
to this one: https://github.com/timholy/Images.jl/issues/237 . 

[julia-users] Metaprogramming and Dispatch

2015-09-29 Thread Matt
I want to write a method that is very similar between two types, with 
slight divergences. I'd like to write it avoiding code duplication. 

To take a simple example, let's say that the two types are Vector and 
Matrix. The method must print "I'm an array" for the two types, and then 
prints "I'm a vector"  if the object is a vector, "I'm a matrix" if the 
object is a matrix.

One way write this method is to use a "run time" dispatch

function f{T, N}(x::Array{T, N})
println("I'm an array")
if N == 1
   println("I'm a vector")
   elseif N == 2
println("I'm a matrix")
end
end
f(rand(10, 10))
@code_warntype f(rand(10, 10))

A way to dispatch at compile time is to define as many auxiliary function 
as there are differing lines

function _ansfunction(x::Vector)
   println("I'm a vector")
end
function _ansfunction(x::Matrix)
println("I'm a matrix")
end
function f1(x)
println("I'm an array")
_ansfunction(x)
end
f1(rand(10, 10))
@code_warntype f1(rand(10, 10))

Another solution would be to iterate through two NTuples, where N is the 
number of differing lines

for (t, x) in ((:Vector, :(println("I'm a vector"))), 
(:Matrix, :(println("I'm a Matrix"))))
@eval begin
function f2(x::$t)
println("I'm an array")
$x
end
end
end
f2(rand(10, 10))
@code_warntype f2(rand(10, 10))


These last two solutions work, but I really prefer the syntax of the first 
solution: it allows writing the differing lines exactly at the place 
they're needed, when they're needed. It really starts to matter when there 
are a few differing lines. Is there a syntax as clear as the first 
solution, but that "branches" at compile time?


Re: [julia-users] Escher image

2015-09-29 Thread Yakir Gagnon
Thanks!

On Tuesday, September 29, 2015 at 9:55:06 PM UTC+10, Shashi Gowda wrote:
>
> You can put the image in the assets/ directory under the directory you 
> started the server in and then show it with image("assets/img.jpg")
>
> On Tue, Sep 29, 2015, 11:50 AM Yakir Gagnon <12.y...@gmail.com 
> > wrote:
>
>> How do you show a local image (say it’s in the directory where you ran 
>> escher 
>> --serve)?
>> ​
>>
>

[julia-users] Re: Will Julia likely ever allow negative indexing of arrays

2015-09-29 Thread Luke Stagner
Nice to see more plasma physicists using Julia. 

a=0.0
>
You could just do 
a[:] = 0.0

to set all the elements to zero or you could do 
a=zeros(n)

Regarding negative indexing: while it may not be part of Base, you should be 
able to replicate the effect with a package, e.g. 
https://github.com/simonster/TwoBasedIndexing.jl
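For the curious, a very rough sketch of the idea behind such a package (this is not TwoBasedIndexing.jl's code, just an illustration of shifting indices by overloading indexing):

```
immutable OffsetVector{T}
    data::Vector{T}
    offset::Int                         # index i maps to data[i - offset]
end

Base.getindex(v::OffsetVector, i::Int) = v.data[i - v.offset]
Base.setindex!(v::OffsetVector, x, i::Int) = (v.data[i - v.offset] = x)
Base.length(v::OffsetVector) = length(v.data)

v = OffsetVector(zeros(5), -3)          # valid indices are -2:2
v[-2] = 1.0
v[-2]                                   # 1.0
```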


[julia-users] Re: Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Luke Stagner
I'm not sure what version of Julia you are using but in Julia 0.3.9 x = 
linspace(0,1,N) does return a linearly spaced N-element array of floats 
from 0-1 



On Tuesday, September 29, 2015 at 3:44:43 PM UTC-7, feza wrote:
>
> In matlab  x = linspace(0,1,n)  creates a vector of floats of length n. In 
> julia it seems like the only way to do this is to use x = collect( 
> linspace(0,1,n) ) . Is there a nicer syntax? I do mainly numeric computing 
> and I find this quite common in my code.
>
> Thanks.
>


[julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread feza
In matlab  x = linspace(0,1,n)  creates a vector of floats of length n. In 
julia it seems like the only way to do this is to use x = collect( 
linspace(0,1,n) ) . Is there a nicer syntax? I do mainly numeric computing 
and I find this quite common in my code.

Thanks.


[julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Daniel Carrera
On Tuesday, 29 September 2015 14:50:14 UTC+2, Páll Haraldsson wrote:
>
> Even if Julia were as slow as Python, it seems to be a better language - 
> more maintainable
>

Exactly! I never liked Python. I use Julia because I like the language 
itself. The fact that it is fast is actually secondary, and mainly just 
allows me to solve problems that I otherwise might have solved with Fortran 
or left unsolved. Before Julia, I used Octave. I knew that Octave was 
probably slower than NumPy, but I didn't care.


Only for fast code, not "correct" code. And 90% of code isn't 
> speed-critical. To a beginner this seems very good. First you learn the 
> basics. You can prototype and later optimize ("premature optimization is 
> the root of all evil"?).
>


Also, a lot of important problems are not speed-sensitive. I use Julia to 
analyze simulation results. I use it mainly because it is a very agile 
language; I can try to look at the data in different ways.

Cheers,
Daniel.


Re: [julia-users] Escher plotting example

2015-09-29 Thread Yakir Gagnon
while it doesn't issue any errors, the slider doesn't affect the plot...?


Yakir Gagnon
The Queensland Brain Institute (Building #79)
The University of Queensland
Brisbane QLD 4072
Australia

cell +61 (0)424 393 332
work +61 (0)733 654 089

On Wed, Sep 30, 2015 at 2:25 PM, Yakir Gagnon <12.ya...@gmail.com> wrote:

> omg, awesome! Thank you (again)!
>
>
> Yakir Gagnon
> The Queensland Brain Institute (Building #79)
> The University of Queensland
> Brisbane QLD 4072
> Australia
>
> cell +61 (0)424 393 332
> work +61 (0)733 654 089
>
> On Wed, Sep 30, 2015 at 2:17 PM, Shashi Gowda 
> wrote:
>
>> here is the change you need:
>>
>>
>> https://github.com/shashi/Escher.jl/commit/d2b5c57dd91abc74901821120f57e020809b3bb1
>>
>> On Wed, Sep 30, 2015 at 9:45 AM, Shashi Gowda 
>> wrote:
>>
>>> You need Gadfly.inch in place of inch. It's because of a  new change in
>>> Julia 0.4 - if two packages (in this case Escher and Gadfly) export two
>>> different things with the same name (in this case inch), Julia requires you
>>> to fully qualify it (such as Gadfly.inch or Escher.inch). You can use
>>> Escher.inch when setting the width / height of an Escher object and
>>> Gadfly.inch when setting the dimensions of a Gadfly plot in drawing
>>>
>>> On Wed, Sep 30, 2015 at 8:56 AM, Yakir Gagnon <12.ya...@gmail.com>
>>> wrote:
>>>
 I’m trying to use Escher as the main GUI for a program I’m writing. I
 need to replot some data every time the user changes a slider. The
 plotting.jl example seems like the best point to start at. But I get
 the following error (shown only in the browser, not at the shell): 
 UndefVarError:
 inch not defined

 Has anyone managed to plot some data and have a (say) slider update it
 in Escher?
 ​

>>>
>>>
>>
>


Re: [julia-users] ANN: Glove.jl

2015-09-29 Thread Kevin Squire
Hi Dom,

It would be useful to include a short description when announcing a
package.

Cheers,
   Kevin

On Tuesday, September 29, 2015, Dom Luna  wrote:

> Hey all,
>
> I started this a couple of months back but ran into a couple of issues and
> sort of just had it on the backburner. I worked on it these last couple of
> days and I've gotten it to a usable state.
>
> https://github.com/domluna/Glove.jl
>
> I haven't done anything to make it parallel yet, that would be the next
> big performance win.
>
> Any tips or contributions to make it better would be more than welcome :)
>


Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Stefan Karpinski
I'm curious why you need a vector rather than an object. Do you mutate it
after creating it? Having linspace return an object instead of a vector was
a bit of an unclear judgement call, so getting feedback would be good.

On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
patrick.mogen...@gmail.com> wrote:

> No:
>
> julia> logspace(0,3,5)
> 5-element Array{Float64,1}:
> 1.0
> 5.62341
>31.6228
>   177.828
>  1000.0
>
> On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>>
>> Thats interesting. Does logspace also return a range?
>>
>> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:
>>>
>>> In 0.4 the linspace function returns a range object, and you need to use
>>> collect() to expand it. I'm also interested in nicer syntax.
>>
>>


Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Luke Stagner
Personally I don't like the behaviour (perhaps I am just too used to the 
numpy/matlab convention). Also, we already had linrange (although curiously 
not logrange) if we wanted to use a range.


On Tuesday, September 29, 2015 at 5:59:52 PM UTC-7, Stefan Karpinski wrote:
>
> I'm curious why you need a vector rather than an object. Do you mutate it 
> after creating it? Having linspace return an object instead of a vector was 
> a bit of a unclear judgement call so getting feedback would be good.
>
> On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
> patrick@gmail.com > wrote:
>
>> No:
>>
>> julia> logspace(0,3,5)
>> 5-element Array{Float64,1}:
>> 1.0
>> 5.62341
>>31.6228 
>>   177.828  
>>  1000.0   
>>
>> On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>>>
>>> Thats interesting. Does logspace also return a range?
>>>
>>> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:

 In 0.4 the linspace function returns a range object, and you need to 
 use collect() to expand it. I'm also interested in nicer syntax.
>>>
>>>

Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Luke Stagner
A range should act (for the most part) exactly like an array. For example, 
indexing into a range is identical (syntax-wise) to indexing into an array. 
What I am concerned about is performance. For instance, if I had a range 
with a large number of elements, would indexing into it be slower than 
indexing into an array? Wouldn't the range have to compute the value every 
single time instead of just doing a memory lookup? Or is the calculation of 
elements trivial, so the memory savings make up for it?
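(One way to check that concern empirically, as an added illustration rather than part of the original message:)

```
r = linspace(0, 1, 10^7)   # range-like object on 0.4
v = collect(r)             # materialized Vector{Float64}

function sumindex(x)
    s = 0.0
    for i in 1:length(x)
        s += x[i]          # computed on the fly for r, memory reads for v
    end
    return s
end

@time sumindex(r)
@time sumindex(v)
```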

Performance questions aside, having linspace return a range instead of an 
array just feels like a change for change's sake. I don't see a good reason 
for replacing the behaviour of linspace with a behaviour that is already 
realized in linrange. 

-Luke 

On Tuesday, September 29, 2015 at 6:13:37 PM UTC-7, Chris wrote:
>
> For me, I think I just expect a vector from experience, and I could 
> probably just change the way I work with a little effort.
>
> One exception (I think) is that I often do numerical integration over a 
> range of values, and I need the results at every value. I'm not sure if 
> there's a way to do that with range objects only.
>
> On Tue, Sep 29, 2015, 20:59 Stefan Karpinski  > wrote:
>
>> I'm curious why you need a vector rather than an object. Do you mutate it 
>> after creating it? Having linspace return an object instead of a vector was 
>> a bit of a unclear judgement call so getting feedback would be good.
>>
>> On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
>> patrick@gmail.com > wrote:
>>
>>> No:
>>>
>>> julia> logspace(0,3,5)
>>> 5-element Array{Float64,1}:
>>> 1.0
>>> 5.62341
>>>31.6228 
>>>   177.828  
>>>  1000.0   
>>>
>>> On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:

 Thats interesting. Does logspace also return a range?

 On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:
>
> In 0.4 the linspace function returns a range object, and you need to 
> use collect() to expand it. I'm also interested in nicer syntax.




Re: [julia-users] restricting the type of bitstype type parameters

2015-09-29 Thread Tim Holy
Yes, the syntax you proposed already works. Try it out.

--Tim

On Tuesday, September 29, 2015 06:42:36 PM Tom Lee wrote:
> Is there any way to restrict a type parameter to a particular bitstype, or
> is this planned in the next type design iteration? eg:
> 
> type FixedVector{T<:Number, n::Integer}
> data::NTuple{n,T}
> end
> 
> 
> You can achieve this by adding checks in the constructor, but doing so
> always makes things so untidy.



Re: [julia-users] Metaprogramming and Dispatch

2015-09-29 Thread Stefan Karpinski
The branch on the first version will be eliminated. The second version is
far more idiomatic, however. Consider that the second version is trivially
extensible to more types while the first version cannot be extended at all.

On Tuesday, September 29, 2015, Matt  wrote:

> I want to write a method that is very similar between two types, with
> slight divergences. I'd like to write it avoiding code duplication.
>
> To take a simple example, let's say that the two types are Vector and
> Matrix. The method must print "I'm an array" for the two types, and then
> prints "I'm a vector"  if the object is a vector, "I'm a matrix" if the
> object is a matrix.
>
> One way write this method is to use a "run time" dispatch
>
> function f{T, N}(x::Array{T, N})
> println("I'm an array")
> if N == 1
>println("I'm a vector")
>elseif N == 2
> println("I'm a matrix")
> end
> end
> f2(rand(10, 10))
> @code_warntype f2(rand(10, 10))
>
> A way to dispatch at compile time is to define as many auxiliary function
> as there are differing lines
>
> function _ansfunction(x::Vector)
>println("I'm a vector")
> end
> function _ansfunction(x::Matrix)
> println("I'm a matrix")
> end
> function f1(x)
> println("I'm an array")
> _ansfunction(x)
> end
> f1(rand(10, 10))
> @code_warntype f1(rand(10, 10))
>
> Another solution would be to iterate through two NTuples, where N is the
> number of differing lines
>
> for (t, x) in ((:Vector, :(println("I'm a vector"))),
> (:Matrix, :(println("I'm a Matrix"))))
> @eval begin
> function f2(x::$t)
> println("I'm an array")
> $x
> end
> end
> end
> f2(rand(10, 10))
> @code_warntype f2(rand(10, 10))
>
>
> These two last solutions work, but I really prefer the syntax of first
> solution : it allows to write the differing lines exactly at the place
> they're needed, when they're needed. It really starts to mattes when there
> are a few differing lines. Is there a syntax as clear as the first
> solution, but that "branch" at compile time?
>


[julia-users] Re: ANN: Glove.jl

2015-09-29 Thread Dom Luna
Thanks Kevin. This slipped my mind.

Glove (or rather GloVe, Glove is just easier to type) stands for Global 
Word Vectors. The package implements the algorithm described at 
http://nlp.stanford.edu/projects/glove/. The idea is to represent words as 
vectors of floats that capture word similarities. For example, 
"king - man + woman = queen" (operating on word vectors). The other popular 
implementation is word2vec, https://code.google.com/p/word2vec/.
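A toy illustration of that word-vector arithmetic with made-up 3-d embeddings (not Glove.jl's API, just the idea):

```
cosine(a, b) = dot(a, b) / (norm(a) * norm(b))

# Tiny fake embedding table, chosen so the analogy works out
embeddings = Dict(
    "king"  => [0.8, 0.9, 0.1],
    "man"   => [0.7, 0.1, 0.1],
    "woman" => [0.7, 0.1, 0.9],
    "queen" => [0.8, 0.9, 0.9]
)

query = embeddings["king"] - embeddings["man"] + embeddings["woman"]
ranked = sort(collect(keys(embeddings)), by = w -> -cosine(query, embeddings[w]))
ranked[1]   # "queen" for these made-up numbers
```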


Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread feza
Strange, it *was* giving me an error saying it was deprecated and that I 
should use collect, but now it's fine.

On Tuesday, September 29, 2015 at 10:28:12 PM UTC-4, Sheehan Olver wrote:
>
> fez, I'm pretty sure the code works fine without the collect: when exp is 
> called on linspace it converts it to a vector.  Though the returned t will 
> be linspace object.
>
> On Wednesday, September 30, 2015 at 12:10:55 PM UTC+10, feza wrote:
>>
>> Here's the code I was using where I needed to use collect (I've been 
>> playing around with Julia, so any suggestions on this code for perf is 
>> welcome ;) ) . In general linspace (or the : notation)  is also used 
>> commonly to lay  a grid in space for solving a PDE for some other use 
>> cases. 
>>
>> function gp(n)
>> n = convert(Int,n)
>> t0 = 0
>> tf = 5
>> t = collect( linspace(t0, tf, n+1) )
>> sigma = exp( -(t - t[1]) )
>>
>> c = [sigma; sigma[(end-1):-1:2]]
>> lambda = fft(c)
>> eta = sqrt(lambda./(2*n))
>>
>> Z = randn(2*n) + im*randn(2*n)
>> x = real( fft( Z.*eta ) )
>> return (x, t)
>> end
>>
>>
>> On Tuesday, September 29, 2015 at 8:59:52 PM UTC-4, Stefan Karpinski 
>> wrote:
>>>
>>> I'm curious why you need a vector rather than an object. Do you mutate 
>>> it after creating it? Having linspace return an object instead of a 
>>> vector was a bit of an unclear judgement call, so getting feedback would 
>>> be good.
>>>
>>> On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
>>> patrick@gmail.com> wrote:
>>>
 No:

 julia> logspace(0,3,5)
 5-element Array{Float64,1}:
 1.0
 5.62341
31.6228 
   177.828  
  1000.0   

 On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>
> That's interesting. Does logspace also return a range?
>
> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:
>>
>> In 0.4 the linspace function returns a range object, and you need to 
>> use collect() to expand it. I'm also interested in nicer syntax.
>
>

[julia-users] Re: Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Chris
In 0.4 the linspace function returns a range object, and you need to use 
collect() to expand it. I'm also interested in nicer syntax.
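
For concreteness, a quick sketch of what that looks like in 0.4:

r = linspace(0, 1, 5)   # a lazy range object in 0.4
v = collect(r)          # 5-element Vector{Float64}: [0.0, 0.25, 0.5, 0.75, 1.0]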

Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Chris
For me, I think I just expect a vector from experience, and I could
probably just change the way I work with a little effort.

One exception (I think) is that I often do numerical integration over a
range of values, and I need the results at every value. I'm not sure if
there's a way to do that with range objects only.
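
That said, ranges do support indexing and iteration, so something like a 
cumulative trapezoid rule can run on the range directly - a sketch, not a 
claim about any particular quadrature package:

# cumulative trapezoidal integral of f sampled on x, where x can be a
# range/linspace or a Vector -- no collect() needed
function cumtrapz(f, x)
    out = zeros(length(x))
    for i = 2:length(x)
        out[i] = out[i-1] + 0.5 * (x[i] - x[i-1]) * (f(x[i]) + f(x[i-1]))
    end
    out
end

t = linspace(0, pi, 101)
F = cumtrapz(sin, t)     # F[end] ≈ 2.0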

On Tue, Sep 29, 2015, 20:59 Stefan Karpinski  wrote:

> I'm curious why you need a vector rather than an object. Do you mutate it
> after creating it? Having linspace return an object instead of a vector was
> a bit of an unclear judgement call, so getting feedback would be good.
>
> On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
> patrick.mogen...@gmail.com> wrote:
>
>> No:
>>
>> julia> logspace(0,3,5)
>> 5-element Array{Float64,1}:
>> 1.0
>> 5.62341
>>31.6228
>>   177.828
>>  1000.0
>>
>> On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>>>
>>> That's interesting. Does logspace also return a range?
>>>
>>> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:

 In 0.4 the linspace function returns a range object, and you need to
 use collect() to expand it. I'm also interested in nicer syntax.
>>>
>>>


Re: [julia-users] Metaprogramming and Dispatch

2015-09-29 Thread Jameson Nash
As Stefan mentioned, the `@generated` function below is equivalent to the
first function you posted, except that it is worse style since it is not
extensible, will generally require more memory, it is less readable (and it
seems to be giving the wrong answer for N > 2 and N == 0?)

The Julia-style is to use the second function that you posted, since it is
extensible and provides an appropriate separation (abstraction) between
behavior and implementation details.

The `@eval`-style function is used heavily in Base to generate a variety of
ccall's that can be heavily templated. It doesn't make sense to use the
usual Julia-style for these functions since the c-functions themselves are
not extensible. So this style of code is most appropriate when your code is
providing language interoperability with a static language. It isn't
particularly well suited for writing code within Julia, since it is rather
verbose, the compile-time and runtime constructs get mixed rather awkwardly
in the same code, and some essential parts of the code (in this case `t`)
get separated pretty far from their use sites.
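
To make that concrete, here is a minimal sketch (not actual Base code) of the
@eval-over-ccall pattern; it assumes cbrt/expm1/log1p are reachable through the
system libm on your platform:

for f in (:cbrt, :expm1, :log1p)
    fname = symbol(string("my_", f))    # e.g. :my_cbrt (Julia 0.4 spelling)
    @eval begin
        # each iteration splices a different C function name into the ccall
        $fname(x::Float64) = ccall(($(string(f)), "libm"), Float64, (Float64,), x)
    end
end

my_cbrt(27.0)   # ≈ 3.0, assuming libm resolves on this platform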


On Tue, Sep 29, 2015 at 9:17 PM Matt  wrote:

> Thanks for answering. Sorry I deleted my question after finding what I
> wanted. If some people are interested, I like the two solutions below:
>
> for t in (Vector, Matrix)
> @eval begin
> function f2(x::$t)
> println("I'm an array")
> $(t  == Vector ? :(println("I'm a vector")) :  :(println("I'm
> a matrix")))
> end
> end
> end
>
> or even
>
> @generated function f{T, N}(x::Array{T, N})
> quote
> println("I'm an array")
> $(N == 1 ? :(println("I'm a vector")) :  :(println("I'm a
> matrix")))
> end
> end
>
>
>
> On Tuesday, September 29, 2015 at 8:57:10 PM UTC-4, Stefan Karpinski wrote:
>
>> The branch on the first version will be eliminated. The second version is
>> far more idiomatic, however. Consider that the second version is trivially
>> extensible to more types while the first version cannot be extended at all.
>>
>> On Tuesday, September 29, 2015, Matt  wrote:
>>
>>> I want to write a method that is very similar between two types, with
>>> slight divergences. I'd like to write it avoiding code duplication.
>>>
>>> To take a simple example, let's say that the two types are Vector and
>>> Matrix. The method must print "I'm an array" for the two types, and then
>>> prints "I'm a vector"  if the object is a vector, "I'm a matrix" if the
>>> object is a matrix.
>>>
>>> One way write this method is to use a "run time" dispatch
>>>
>>> function f{T, N}(x::Array{T, N})
>>> println("I'm an array")
>>> if N == 1
>>>println("I'm a vector")
>>>elseif N == 2
>>> println("I'm a matrix")
>>> end
>>> end
>>> f(rand(10, 10))
>>> @code_warntype f(rand(10, 10))
>>>
>>> A way to dispatch at compile time is to define as many auxiliary
>>> function as there are differing lines
>>>
>>> function _ansfunction(x::Vector)
>>>println("I'm a vector")
>>> end
>>> function _ansfunction(x::Matrix)
>>> println("I'm a matrix")
>>> end
>>> function f1(x)
>>> println("I'm an array")
>>> _ansfunction(x)
>>> end
>>> f1(rand(10, 10))
>>> @code_warntype f1(rand(10, 10))
>>>
>>> Another solution would be to iterate through two NTuples, where N is the
>>> number of differing lines
>>>
>>> for (t, x) in ((:Vector, :(println("I'm a vector"))),
>>> (:Matrix, :(println("I'm a Matrix"))))
>>> @eval begin
>>> function f2(x::$t)
>>> println("I'm an array")
>>> $x
>>> end
>>> end
>>> end
>>> f2(rand(10, 10))
>>> @code_warntype f2(rand(10, 10))
>>>
>>>
>>> These two last solutions work, but I really prefer the syntax of first
>>> solution : it allows to write the differing lines exactly at the place
>>> they're needed, when they're needed. It really starts to matter when there 
>>> are a few differing lines. Is there a syntax as clear as the first
>>> solution, but that "branch" at compile time?
>>>
>>


[julia-users] Escher plotting example

2015-09-29 Thread Yakir Gagnon


I’m trying to use Escher as the main GUI for a program I’m writing. I need 
to replot some data every time the user changes a slider. The plotting.jl 
example seems like the best point to start at. But I get the following 
error (shown only in the browser, not at the shell): UndefVarError: inch 
not defined

Has anyone managed to plot some data and have a (say) slider update it in 
Escher?


Re: [julia-users] Escher plotting example

2015-09-29 Thread Yakir Gagnon
omg, awesome! Thank you (again)!


Yakir Gagnon
The Queensland Brain Institute (Building #79)
The University of Queensland
Brisbane QLD 4072
Australia

cell +61 (0)424 393 332
work +61 (0)733 654 089

On Wed, Sep 30, 2015 at 2:17 PM, Shashi Gowda 
wrote:

> here is the change you need:
>
>
> https://github.com/shashi/Escher.jl/commit/d2b5c57dd91abc74901821120f57e020809b3bb1
>
> On Wed, Sep 30, 2015 at 9:45 AM, Shashi Gowda 
> wrote:
>
>> You need Gadfly.inch in place of inch. It's because of a new change in
>> Julia 0.4 - if two packages (in this case Escher and Gadfly) export two
>> different things with the same name (in this case inch), Julia requires you
>> to fully qualify it (such as Gadfly.inch or Escher.inch). You can use
>> Escher.inch when setting the width / height of an Escher object and
>> Gadfly.inch when setting the dimensions of a Gadfly plot in a drawing.
>>
>> On Wed, Sep 30, 2015 at 8:56 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>>
>>> I’m trying to use Escher as the main GUI for a program I’m writing. I
>>> need to replot some data every time the user changes a slider. The
>>> plotting.jl example seems like the best point to start at. But I get
>>> the following error (shown only in the browser, not at the shell): 
>>> UndefVarError:
>>> inch not defined
>>>
>>> Has anyone managed to plot some data and have a (say) slider update it
>>> in Escher?
>>> ​
>>>
>>
>>
>
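
A minimal sketch of the qualification described above, assuming Gadfly's usual
draw/SVG entry points:

using Gadfly, Escher          # both export `inch`, so qualify it explicitly

p = Gadfly.plot(x=1:10, y=rand(10))
# Gadfly.inch for the drawing's dimensions...
Gadfly.draw(Gadfly.SVG("plot.svg", 6 * Gadfly.inch, 4 * Gadfly.inch), p)
# ...and Escher.inch when sizing Escher UI elements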


[julia-users] Re: ANN: FileIO.jl, for loading and saving formatted files

2015-09-29 Thread Patrick Kofod Mogensen
As Stefan said - I love it. IMO, stuff like this really enhances the user 
experience and productivity.

On Tuesday, September 29, 2015 at 3:24:13 PM UTC-4, Tim Holy wrote:
>
> Simon Danisch and I are pleased to introduce FileIO.jl 
> (https://github.com/JuliaIO/FileIO.jl), a package for centralizing the 
> handling of formatted files. The package contains utilities for querying 
> files/filenames by their extensions and/or magic bytes to deduce their 
> content. 
> It then allows package authors to "register" loaders/savers for detected 
> file 
> formats. 
>
> Some of the advantages of using FileIO are: 
>
> - For users, this may reduce the searching required to answer "is there a 
> julia package to load file X?" Now you can just try `load("X.ext")` and 
> see 
> what happens---if the loader has been registered, it will try to find the 
> right 
> package for you. 
>
> - For developers, this allows you to decouple your package from others. If 
> you 
> want to work with images, you will (once I tag a new release of Images.jl) 
> no 
> longer need to make the Images package a dependency as long as you use 
> FileIO. 
> In your package code, you can say `load("myimage.png")` and everything 
> should 
> Just Work. 
>
> - For users and developers, this reduces name conflicts among two 
> attractive 
> function names, `load` and `save`. If everyone imports those names from 
> FileIO, there need be no conflicts. 
>
> The package is relatively small and is fairly extensively documented. 
>
> While there are not a large number of formats currently registered, the 
> architecture already shows a lot of promise. For example, it can recognize 
> our 
> .jld files as HDF5 files, something that escapes the well-tested (and much 
> more 
> extensive) Unix `file` command. 
>
> Best, 
> --Tim & Simon 
>
>

Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread elextr


On Wednesday, September 30, 2015 at 11:37:44 AM UTC+10, Luke Stagner wrote:
>
> A range should act (for the most part) exactly like an array. For example 
> indexing into a range is identical (syntax-wise) to indexing an array. What 
> I am concerned about is performance. For instance, if I had a range with a 
> large number of elements, would indexing into it be slower than indexing 
> into an array? Wouldn't the range have to compute the value every single 
> time instead of just doing a memory lookup? Or is the calculation of 
> elements trivial and the memory savings make up for it?
>

If the memory lookup isn't cached, it is likely to be much slower than the 
calculation, which happens entirely in the CPU.  Of course, if you use the same 
value repeatedly the optimiser might avoid re-calculating it, and it might avoid 
re-reading the memory.  Only benchmarks of your particular problem can tell.

And of course a range can be bigger than memory.
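
A rough way to answer that for a given workload (a sketch in 0.4-era Julia, no 
benchmarking package assumed): time an index-summing loop over the range and 
over its collected copy.

function sumidx(x)
    s = 0.0
    for i = 1:length(x)
        s += x[i]        # computed on the fly for a range, a memory load for a Vector
    end
    s
end

r = linspace(0.0, 1.0, 10^7)
v = collect(r)
sumidx(r); sumidx(v)     # warm up the JIT first
@time sumidx(r)
@time sumidx(v)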
 

>
> Performance questions aside, having linspace return a range instead of an 
> array just feels like a change for change's sake. I don't see a good reason 
> for displacing the behaviour of linspace with a behaviour that is already 
> realized in linrange. 
>

> -Luke 
>
> On Tuesday, September 29, 2015 at 6:13:37 PM UTC-7, Chris wrote:
>>
>> For me, I think I just expect a vector from experience, and I could 
>> probably just change the way I work with a little effort.
>>
>> One exception (I think) is that I often do numerical integration over a 
>> range of values, and I need the results at every value. I'm not sure if 
>> there's a way to do that with range objects only.
>>
>> On Tue, Sep 29, 2015, 20:59 Stefan Karpinski  
>> wrote:
>>
>>> I'm curious why you need a vector rather than an object. Do you mutate 
>>> it after creating it? Having linspace return an object instead of a 
>>> vector was a bit of an unclear judgement call, so getting feedback would 
>>> be good.
>>>
>>> On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
>>> patrick@gmail.com> wrote:
>>>
 No:

 julia> logspace(0,3,5)
 5-element Array{Float64,1}:
 1.0
 5.62341
31.6228 
   177.828  
  1000.0   

 On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>
> That's interesting. Does logspace also return a range?
>
> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:
>>
>> In 0.4 the linspace function returns a range object, and you need to 
>> use collect() to expand it. I'm also interested in nicer syntax.
>
>

Re: [julia-users] Metaprogramming and Dispatch

2015-09-29 Thread Matt
Hey,

Sorry, I'm not sure what you're referring to.

Just to clarify: I have written 3 functions in my first message. The first 
function uses run-time dispatch, the second uses auxiliary functions at 
every differing line, and the third uses the @eval style. I like the syntax of 
the first function, but I want the performance of the second or third 
function (i.e. compile-time dispatch).
In my second message, I have posted a 4th and a 5th way, with a syntax close to 
the first function (which I like), but with compile-time dispatch. The 4th 
method is just another way to write the 3rd method. The 5th method uses 
@generated, and I understand it may be overkill (at least for this simple 
example!).

I'm not sure which one you are advising me to use. I think you're referring 
to the second one, right? But I really don't like the idea of defining new 
functions for every differing line between the two methods (including 
finding meaningful new function names). Another issue is that, if a 
diverging line involves a lot of local variables, all of them must be 
passed to the function (typically including a loop counter), which makes 
the whole thing unreadable. Thanks for the explanation of the @eval use in 
Base, I did not realize this.


On Tuesday, September 29, 2015 at 10:33:07 PM UTC-4, Jameson wrote:
>
> As Stefan mentioned, the `@generated` function below is equivalent to the 
> first function you posted, except that it is worse style since it is not 
> extensible, will generally require more memory, it is less readable (and it 
> seems to be giving the wrong answer for N > 2 and N == 0?)
>
> The Julia-style is to use the second function that you posted, since it is 
> extensible and provides an appropriate separation (abstraction) between 
> behavior and implementation details.
>
> The `@eval`-style function is used heavily in Base to generate a variety 
> of ccall's that can be heavily templated. It doesn't make sense to use the 
> usual Julia-style for these functions since the c-functions themselves are 
> not extensible. So this style code is most appropriate when your code is 
> providing language interoperability with a static language. It isn't 
> particularly well suited for writing code within Julia since it is rather 
> verbose, the compile-time and runtime constructs get mixed rather awkwardly 
> in the same code, and some essential parts of the code (in this case `t`), 
> get separated pretty far from their use sites
>
>
> On Tue, Sep 29, 2015 at 9:17 PM Matt  
> wrote:
>
>> Thanks for answering. Sorry I deleted my question after finding what I 
>> wanted. If some people are interested, I like the two solutions below:
>>
>> for t in (Vector, Matrix)
>> @eval begin
>> function f2(x::$t)
>> println("I'm an array")
>> $(t  == Vector ? :(println("I'm a vector")) :  :(println("I'm 
>> a matrix")))
>> end
>> end
>> end
>>
>> or even
>>
>> @generated function f{T, N}(x::Array{T, N})
>> quote
>> println("I'm an array")
>> $(N == 1 ? :(println("I'm a vector")) :  :(println("I'm a 
>> matrix")))
>> end
>> end
>>
>>
>>
>> On Tuesday, September 29, 2015 at 8:57:10 PM UTC-4, Stefan Karpinski 
>> wrote:
>>
>>> The branch on the first version will be eliminated. The second version 
>>> is far more idiomatic, however. Consider that the second version is 
>>> trivially extensible to more types while the first version cannot be 
>>> extended at all.
>>>
>>> On Tuesday, September 29, 2015, Matt  wrote:
>>>
 I want to write a method that is very similar between two types, with 
 slight divergences. I'd like to write it avoiding code duplication. 

 To take a simple example, let's say that the two types are Vector and 
 Matrix. The method must print "I'm an array" for the two types, and then 
 prints "I'm a vector"  if the object is a vector, "I'm a matrix" if the 
 object is a matrix.

 One way write this method is to use a "run time" dispatch

 function f{T, N}(x::Array{T, N})
 println("I'm an array")
 if N == 1
println("I'm a vector")
elseif N == 2
 println("I'm a matrix")
 end
 end
 f(rand(10, 10))
 @code_warntype f(rand(10, 10))

 A way to dispatch at compile time is to define as many auxiliary 
 function as there are differing lines

 function _ansfunction(x::Vector)
println("I'm a vector")
 end
 function _ansfunction(x::Matrix)
 println("I'm a matrix")
 end
 function f1(x)
 println("I'm an array")
 _ansfunction(x)
 end
 f1(rand(10, 10))
 @code_warntype f1(rand(10, 10))

 Another solution would be to iterate through two NTuples, where N is 
 the number of differing lines

 for (t, x) in ((:Vector, :(println("I'm a vector"))), 
 (:Matrix, :(println("I'm a Matrix"))))

[julia-users] Re: Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Patrick Kofod Mogensen
Is it because you use it many times? Just define a clinspace that collects 
the linspace.
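
Something like the following one-liner:

clinspace(a, b, n) = collect(linspace(a, b, n))

x = clinspace(0, 1, 5)   # 5-element Vector{Float64}, like Matlab's linspace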

On Tuesday, September 29, 2015 at 6:44:43 PM UTC-4, feza wrote:
>
> In Matlab, x = linspace(0,1,n) creates a vector of floats of length n. In 
> Julia it seems like the only way to do this is to use x = collect( 
> linspace(0,1,n) ). Is there a nicer syntax? I do mainly numeric computing 
> and I find this quite common in my code.
>
> Thanks.
>


Re: [julia-users] Metaprogramming and Dispatch

2015-09-29 Thread Matt
Thanks for answering. Sorry I deleted my question after finding what I 
wanted. If some people are interested, I like the two solutions below:

for t in (Vector, Matrix)
    @eval begin
        function f2(x::$t)
            println("I'm an array")
            $(t == Vector ? :(println("I'm a vector")) : :(println("I'm a matrix")))
        end
    end
end

or even

@generated function f{T, N}(x::Array{T, N})
    quote
        println("I'm an array")
        $(N == 1 ? :(println("I'm a vector")) : :(println("I'm a matrix")))
    end
end



On Tuesday, September 29, 2015 at 8:57:10 PM UTC-4, Stefan Karpinski wrote:
>
> The branch on the first version will be eliminated. The second version is 
> far more idiomatic, however. Consider that the second version is trivially 
> extensible to more types while the first version cannot be extended at all.
>
> On Tuesday, September 29, 2015, Matt  
> wrote:
>
>> I want to write a method that is very similar between two types, with 
>> slight divergences. I'd like to write it avoiding code duplication. 
>>
>> To take a simple example, let's say that the two types are Vector and 
>> Matrix. The method must print "I'm an array" for the two types, and then 
>> prints "I'm a vector"  if the object is a vector, "I'm a matrix" if the 
>> object is a matrix.
>>
>> One way write this method is to use a "run time" dispatch
>>
>> function f{T, N}(x::Array{T, N})
>> println("I'm an array")
>> if N == 1
>>println("I'm a vector")
>>elseif N == 2
>> println("I'm a matrix")
>> end
>> end
>> f(rand(10, 10))
>> @code_warntype f(rand(10, 10))
>>
>> A way to dispatch at compile time is to define as many auxiliary function 
>> as there are differing lines
>>
>> function _ansfunction(x::Vector)
>>println("I'm a vector")
>> end
>> function _ansfunction(x::Matrix)
>> println("I'm a matrix")
>> end
>> function f1(x)
>> println("I'm an array")
>> _ansfunction(x)
>> end
>> f1(rand(10, 10))
>> @code_warntype f1(rand(10, 10))
>>
>> Another solution would be to iterate through two NTuples, where N is the 
>> number of differing lines
>>
>> for (t, x) in ((:Vector, :(println("I'm a vector"))), 
>> (:Matrix, :(println("I'm a Matrix"))))
>> @eval begin
>> function f2(x::$t)
>> println("I'm an array")
>> $x
>> end
>> end
>> end
>> f2(rand(10, 10))
>> @code_warntype f2(rand(10, 10))
>>
>>
>> These two last solutions work, but I really prefer the syntax of first 
>> solution : it allows to write the differing lines exactly at the place 
>> they're needed, when they're needed. It really starts to matter when there 
>> are a few differing lines. Is there a syntax as clear as the first 
>> solution, but that "branch" at compile time?
>>
>

Re: [julia-users] Escher plotting example

2015-09-29 Thread Shashi Gowda
here is the change you need:

https://github.com/shashi/Escher.jl/commit/d2b5c57dd91abc74901821120f57e020809b3bb1

On Wed, Sep 30, 2015 at 9:45 AM, Shashi Gowda 
wrote:

> You need Gadfly.inch in place of inch. It's because of a  new change in
> Julia 0.4 - if two packages (in this case Escher and Gadfly) export two
> different things with the same name (in this case inch), Julia requires you
> to fully qualify it (such as Gadfly.inch or Escher.inch). You can use
> Escher.inch when setting the width / height of an Escher object and
> Gadfly.inch when setting the dimensions of a Gadfly plot in drawing
>
> On Wed, Sep 30, 2015 at 8:56 AM, Yakir Gagnon <12.ya...@gmail.com> wrote:
>
>> I’m trying to use Escher as the main GUI for a program I’m writing. I
>> need to replot some data every time the user changes a slider. The
>> plotting.jl example seems like the best point to start at. But I get the
>> following error (shown only in the browser, not at the shell): UndefVarError:
>> inch not defined
>>
>> Has anyone managed to plot some data and have a (say) slider update it in
>> Escher?
>> ​
>>
>
>


[julia-users] julia unikernel - rumprun

2015-09-29 Thread Martin Somers
Hi
Just wondering if anyone might have any insight into whether this might be 
possible to do with Julia - i.e. run Julia as a unikernel under rumprun.

https://gandro.github.io/2015/09/27/rust-on-rumprun/

M


[julia-users] Re: Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread Patrick Kofod Mogensen
No:

julia> logspace(0,3,5)
5-element Array{Float64,1}:
1.0
5.62341
   31.6228 
  177.828  
 1000.0   

On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>
> That's interesting. Does logspace also return a range?
>
> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:
>>
>> In 0.4 the linspace function returns a range object, and you need to use 
>> collect() to expand it. I'm also interested in nicer syntax.
>
>

[julia-users] restricting the type of bitstype type parameters

2015-09-29 Thread Tom Lee

Is there any way to restrict a type parameter to a particular bitstype, or 
is this planned in the next type design iteration? eg:

type FixedVector{T<:Number, n::Integer}
    data::NTuple{n,T}
end


You can achieve this by adding checks in the constructor, but doing so 
always makes things so untidy.
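
For reference, a sketch of that constructor-check workaround in 0.4 syntax (the 
parameter n stays unconstrained as far as the type system is concerned; the 
check only runs when an instance is built):

type FixedVector{T<:Number, n}
    data::NTuple{n,T}

    function FixedVector(data::NTuple{n,T})
        # enforce at construction time what the parameter syntax can't express
        isa(n, Int) || throw(ArgumentError("n must be an Int"))
        new(data)
    end
end

FixedVector{Float64,3}((1.0, 2.0, 3.0))   # ok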


[julia-users] Package badge URLs for nightly build

2015-09-29 Thread Júlio Hoffimann
Hi,

Could you please confirm the badge URLs for nightly builds are obsolete and 
that I have to use a specific Julia version 
now? https://github.com/juliohm/ImageQuilting.jl/blob/master/README.md

-Júlio


Re: [julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Christoph Ortner

I prefer Julia to Python for many reasons, not only performance, but purely 
regarding performance I often wondered: 

How much of the poor Python performance is just its current state? In fact, Numba 
can often take care of it in a very low-effort way. 
 (a) What is Numba lacking now, compared to Julia (performance-wise)?
 (b) Would it be realistic to assume that a "Python 4" might be built on 
top of a similar architecture to Julia's and hence yield similar performance? 
Or is there something special about Julia's design?

Thanks,
   Christoph



[julia-users] ANN: FileIO.jl, for loading and saving formatted files

2015-09-29 Thread Tim Holy
Simon Danisch and I are pleased to introduce FileIO.jl 
(https://github.com/JuliaIO/FileIO.jl), a package for centralizing the 
handling of formatted files. The package contains utilities for querying 
files/filenames by their extensions and/or magic bytes to deduce their content. 
It then allows package authors to "register" loaders/savers for detected file 
formats.

Some of the advantages of using FileIO are:

- For users, this may reduce the searching required to answer "is there a 
julia package to load file X?" Now you can just try `load("X.ext")` and see 
what happens---if the loader has been registered, it will try to find the right 
package for you.

- For developers, this allows you to decouple your package from others. If you 
want to work with images, you will (once I tag a new release of Images.jl) no 
longer need to make the Images package a dependency as long as you use FileIO. 
In your package code, you can say `load("myimage.png")` and everything should 
Just Work.
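
(A minimal usage sketch, assuming some imaging package has registered a PNG 
loader and saver with FileIO:)

using FileIO

img = load("myimage.png")          # FileIO picks the registered loader
save("myimage_copy.png", img)      # ...and the matching saver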

- For users and developers, this reduces name conflicts among two attractive 
function names, `load` and `save`. If everyone imports those names from 
FileIO, there need be no conflicts.

The package is relatively small and is fairly extensively documented.

While there are not a large number of formats currently registered, the 
architecture already shows a lot of promise. For example, it can recognize our 
.jld files as HDF5 files, something that escapes the well-tested (and much more 
extensive) Unix `file` command.

Best,
--Tim & Simon



[julia-users] Re: Will Julia likely ever allow negative indexing of arrays

2015-09-29 Thread Mark Sherlock
Thanks for all the replies, I'll check the links provided.
Just to give a summary of the reasons: as John Gibson pointed out, 
Fourier expansions would be one example.

When solving PDE's we use "ghost cells" to provide boundary conditions. A 
physical variable, let's say density, could be represented in space by a 3D 
array,
with each spatial index (ix,iy,iz) taking on values like ix=1..nx, 
iy=1..ny, iz=1..nz etc. If the boundary is, say, reflective, then you need 
to set the value of the density in
the part of space you can't "see", e.g. in the regions ix=-2..0 and 
ix=nx+1..nx+3

Obviously this can be handled by offsetting each array index by some 
amount. The problem is mainly that what we have written down on paper, in 
books and papers on
numerical analysis etc are numerical equations which naturally count from 1 
to n along some dimension. Couple that to the fact that many codes rely on 
arrays that
consist of mixtures of real space (3D say) and special function expansions, 
with Fourier expansion being one example.
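
A minimal sketch of how that offsetting could be wrapped up once in Julia, so 
the loops keep their natural bounds (the OffsetVector type here is hypothetical, 
not an existing package):

# 1-D array whose valid indices run from lo upwards, e.g. -2:nx+3,
# so the "ghost cell" code can keep its natural bounds
immutable OffsetVector{T}
    data::Vector{T}
    lo::Int                 # smallest valid index
end

Base.getindex(v::OffsetVector, i::Integer)     = v.data[i - v.lo + 1]
Base.setindex!(v::OffsetVector, x, i::Integer) = (v.data[i - v.lo + 1] = x)

nx = 10
rho = OffsetVector(zeros(Float64, nx + 6), -2)   # valid indices -2:nx+3
rho[-2] = rho[1]                                 # e.g. a reflective boundary copy
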
I for example use a 7-dimensional array to describe some problems (3 
dimensions for real space, and 3 dimensions for velocity space). Real space 
is not expanded,
so it counts from 1..n naturally, while velocity space has 2 dimensions 
expanded with one type of series (counting from 0..n) and the other 
dimension expanded in
a different series (counting from -n..n). The final seventh dimension 
naturally has only 2 indices 0 and 1 (which accounts nicely for the real 
and imaginary parts of the function).

Within each array loop, it is nice to be able to use if statements that 
correspond to what you have written down on paper, e.g.:

if(in>0) then blah
if(im<1) then something else

or something like:

do i=0,1
do im=-mmax,mmax
do in=max(im,0),nmax
do ir=1,nr
do ix=1,nx
do iy=1,ny
do iz=2,nz-1

blah

enddo
enddo
enddo
enddo
enddo
enddo
enddo

(incidentally, imagine how many tabs would be required to write the above 
in python!).

Within a code, there may be many such loops and structures with different 
looping conditions.

When you have a code with 2-4 lines that you want to update to a 
new language, having to change all of these to account for offsetting is 
too much work for anyone to seriously consider.
Not to mention the fact that you can no longer easily code up the difference 
equations you have written on paper (and possibly spent months creating). 
That, in a nutshell, is why nobody is switching from 
Fortran (even the young guys are writing new codes in Fortran). There are 
other nice things about Fortran arrays, such as the ability to quickly set 
all elements of an array to zero with:

a=0.0
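
(For what it's worth, the Julia spelling of that is also short:)

fill!(a, 0.0)                    # zero every element of an existing array a
a = zeros(Float64, nx, ny, nz)   # or allocate a fresh zeroed array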

I hope that sort of makes it clear the type of problems we deal with?!

Cheers


Re: [julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Scott Jones
Well, I do think there is a lot special about Julia's design that could not 
be added on to Python - the macro programming, multiple dispatch 
capabilities and the powerful type system (hopefully with traits built into 
the language in the future!) are what most set it apart
from other languages, in my mind.
Those characteristics make it possible to write a very small amount of 
code, but have the compiler do the work of making highly optimized versions 
of a function - and do so at compile-time, not run-time (and if a function 
is called with types that it hasn't seen before,
well, it's JITed, so you still have the dynamic nature that makes languages 
like Python so appealing to people).  Best of both worlds, IMO.
Keno's work is also making it easier and easier to call directly into C++, 
so if you have big libraries of optimized C++ code, you can just use them.

All good! (Yes, there's a lot more work that needs to be done, but the 
fundamentals of Julia are superb).

Scott

On Tuesday, September 29, 2015 at 2:12:17 PM UTC-4, Christoph Ortner wrote:
>
>
> I prefer Julia to Python for many reasons, not only performance, but 
> purely regarding performance I often wondered: 
>
> How much is the poor Python performance just a current state. In fact 
> NUMBA can often take care of it in a very low-effort way. 
>  (a) What is numba lacking now, compared to Julia (performance-wise)
>  (b) Would it be realistic to assume that a "Python 4" might be built on 
> top of similar architecture as Julia and hence yield similar performance? 
> Or is there something special about Julia's design?
>
> Thanks,
>Christoph
>
>

[julia-users] Re: What's the reason of the Success of Python?

2015-09-29 Thread Steven G. Johnson


On Tuesday, September 29, 2015 at 12:42:04 PM UTC-4, Edmondo Giovannozzi 
wrote:
>
> First of all I want to say that you are doing an excellent job.
> I program in Fortran and python but I'm keeping an eye on Julia, I haven't 
> decided to switch yet. I'm not afraid to learn a new language, so I may do 
> it in the future.
>
> Python is slow, but most of the time I use vectorized instructions (with 
> numpy), and it is quite fast for what is needed (processing of experimental 
> data).  
> The Python matplotlib graphics package is amazing. I know that it can be 
> used from Julia, but when I tried, it took some seconds to import it (not a 
> big issue, but when you want to use it interactively it can be).
>

(It's a lot faster now that we have precompiled packages.  Down from 30 
seconds to 4 seconds on my machine.   Future improvements in precompilation 
should bring this down further, although there is a limit because it needs 
to do runtime initialization of Python and Matplotlib, and even in Python 
it takes more than 1s to load Matplotlib on my machine.)

 

> We also have a lot of code already written in Python. That's not necessarily 
> a problem, as we already switched once from Matlab to Python.
>
> If speed is needed I can use Fortran, as it is quite easy to link a 
> Fortran procedure to Python with f2py.
>
> The other point cited was true: when I tried to do something fast, I 
> couldn't do it fast the first time (in the end I found a way to speed it up, 
> and it was even faster than its Fortran equivalent).
>
> A lot of programs we have are mainly simulation program already written in 
> Fortran so we may have to stick to it.
>
> There are also a couple of things. One is the use of the Matlab pointwise 
> operators .*, ./, etc. Well, I don't think it was really a good idea (at 
> least for me). Most of the time all the operations I need are just pointwise 
> (how do you call them?) and only a few of them are really matrix operations. 
> Of course it is not something that will keep me away from Julia, but I wonder 
> if there might ever be, in the future, the possibility to change that 
> behaviour (a daydream).
>  
> I also liked a lot the possibility in numpy to add an axis to an array 
> with the keyword numpy.newaxis. But I'm quite sure that it wouldn't be a 
> problem to add it to Julia (if it's not already possible, of course; I'm 
> waiting for the final release of Julia 0.4 in order to have a new look at 
> the new features).
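
(An aside on that point: the closest Julia spelling of numpy.newaxis is 
probably a reshape that adds a singleton dimension - a sketch:)

v = rand(5)
col = reshape(v, 5, 1)   # 5x1, like v[:, newaxis]; shares the underlying data
row = reshape(v, 1, 5)   # 1x5, like v[newaxis, :]
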
>
> The other suggestion is to speed up vectorized instructions. The fact that 
> the manual says to write explicit loops in places where even in 
> Fortran one wouldn't have used loops anymore is a drawback.
>
> And last, it is psychological, but one is always waiting for the first 
> stable release (when is 1.0 coming...?)
>
> By the way, I'm eager for the 0.4 release; I'll install it and I'll let you 
> know my impressions.
>   
>   
>
>
>

[julia-users] Re: Is it possible to "precompile" during the "__precompile__"?

2015-09-29 Thread Steven G. Johnson


On Tuesday, September 29, 2015 at 2:16:36 AM UTC-4, Sheehan Olver wrote:
>
>
> I'm wondering if its possible to have functions precompiled for specific 
> types during the __precompile__ stage, so that this does not need to be 
> repeated for each using?
>
> Gadfly seems to do this by defining a _precompile_() function, but it's 
> unclear whether that's compiling during the __precompile__ stage, vs during 
> the loading stage.  
>

The `__precompile__` stage just imports the package, and caches anything 
that was compiled during a normal `import`.  So, if you want to precompile 
certain function signatures, you can just add `precompile(myfunction, 
(argtypes...))` calls in the body of your module.
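
A minimal sketch of that pattern (MyPkg and solve are made-up names):

__precompile__()

module MyPkg

solve(A::Matrix{Float64}, b::Vector{Float64}) = A \ b

# runs while the precompile cache is being built, so the compiled method is
# cached instead of being recompiled on every `using MyPkg`
precompile(solve, (Matrix{Float64}, Vector{Float64}))

end # module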

The SnoopCompile package automates this process to a certain degree, by 
detecting which functions it would be most advantageous to precompile. 


[julia-users] Re: How to create an array of a given type?

2015-09-29 Thread Dongning Guo
I suppressed the display and problem solved!  Thanks!

On Tuesday, September 29, 2015 at 1:15:26 AM UTC-5, Michael Hatherly wrote:
>
> This is probably https://github.com/JunoLab/atom-julia-client/issues/44 — 
> just a display issue with arrays containing #undef. Not sure when someone 
> will have a chance to fix it. Until someone does you'll need to avoid 
> trying to display results containing undefined references. If you’d like to 
> try fix it then a good place to start looking is the 
> https://github.com/JunoLab/Atom.jl/tree/master/src/display folder.
> ​
>
> — Mike
> On Tuesday, 29 September 2015 06:17:11 UTC+2, Dongning Guo wrote:
>>
>> I'd like to create an array of a certain type.  The basic code looks as 
>> follows:
>>
>> type T
>>   x::Float64
>>   y::Float64
>> end
>> b = Array{T}(3)
>>
>> Basically I wish  b[1], b[2], b[3] to be of the same type T.
>>
>> I'm using Julia within the Atom editor.  The last line of code causes 
>> Julia to hang and produces the following error "Julia Client – Internal 
>> Error
>> UndefRefError: access to undefined reference
>> in yieldto at 
>> /Applications/Julia-0.4.0-rc2.app/Contents/Resources/julia/lib/julia/sys.dylib
>> in wait at 
>> /Applications/Julia-0.4.0-rc2.app/Contents/Resources/julia/lib/julia/sys.dylib
>>  
>> (repeats 3 times)
>> [inlined code] from /Users/me/.julia/v0.4/Lazy/src/dynamic.jl:69
>> in anonymous at /Users/me/.julia/v0.4/Atom/src/eval.jl:44
>> [inlined code] from /Users/me/.julia/v0.4/Atom/src/comm.jl:23
>> in anonymous at task.jl:63
>>
>>
>> The same code works just fine under command line Julia (0.4.0-rc2).
>>
>> Any help will be appreciated, including how to have a clean slate 
>> reinstallation (either Juno or Atom) that just works.
>>
>> I'm relatively new to Julia.  I was using Julia under Juno until Juno 
>> broke down with no fix after I tried.  Reinstallation didn't help (several 
>> other people had similar problems, see 
>> https://groups.google.com/d/topic/julia-users/ntOb9HNm0ac/discussion). 
>>  I then switched to Atom and still have problems.
>>
>>

[julia-users] Re: Displaying images in Jupyter notebook

2015-09-29 Thread Steven G. Johnson
Probably you should file an issue at the Images.jl github page.

(I saw something similar on Alan's machine.  My suspicion was a 
misconfigured ImageMagick, but I was able to get it working just by 
downgrading the Images package to an older version.   So, you might be able 
to do a git bisect and report exactly where the problem started.)


Re: [julia-users] ANN: FileIO.jl, for loading and saving formatted files

2015-09-29 Thread Stefan Karpinski
This is great – I love this kind of thing.

On Tue, Sep 29, 2015 at 3:24 PM, Tim Holy  wrote:

> Simon Danisch and I are pleased to introduce FileIO.jl
> (https://github.com/JuliaIO/FileIO.jl), a package for centralizing the
> handling of formatted files. The package contains utilities for querying
> files/filenames by their extensions and/or magic bytes to deduce their
> content.
> It then allows package authors to "register" loaders/savers for detected
> file
> formats.
>
> Some of the advantages of using FileIO are:
>
> - For users, this may reduce the searching required to answer "is there a
> julia package to load file X?" Now you can just try `load("X.ext")` and see
> what happens---if the loader has been registered, it will try to find the
> right
> package for you.
>
> - For developers, this allows you to decouple your package from others. If
> you
> want to work with images, you will (once I tag a new release of Images.jl)
> no
> longer need to make the Images package a dependency as long as you use
> FileIO.
> In your package code, you can say `load("myimage.png")` and everything
> should
> Just Work.
>
> - For users and developers, this reduces name conflicts among two
> attractive
> function names, `load` and `save`. If everyone imports those names from
> FileIO, there need be no conflicts.
>
> The package is relatively small and is fairly extensively documented.
>
> While there are not a large number of formats currently registered, the
> architecture already shows a lot of promise. For example, it can recognize
> our
> .jld files as HDF5 files, something that escapes the well-tested (and much
> more
> extensive) Unix `file` command.
>
> Best,
> --Tim & Simon
>
>