[julia-users] Re: memory allocation in nested loops

2016-10-04 Thread 'Greg Plowman' via julia-users
Not sure what the best way is, especially on Julia 0.5 where maybe some
sort of view would help.

But you could always use an Array of Arrays if you don't really need a true 
multi-dimensional array elsewhere:

d = randn(3,n)
d2 = [ d[:,i]::Vector{Float64} for i=1:n ]
r = randn(3,m)
r2 = [ r[:,i]::Vector{Float64} for i=1:m ]

function nested_loop!(Y::Array{Complex{Float64}},X::Array{Complex{Float64}},
                      d::Vector{Vector{Float64}},r::Vector{Vector{Float64}},
                      n::Int64,m::Int64,f::Int64)
    for i_1 = 1:f
        for i_3 = 1:n
            k = 2*π*i_1*d[i_3]
            for i_2 = 1:m
                A = exp(im*dot(k,r[i_2]))
                Y[i_1,i_2] += X[i_1,i_3] * A
            end
        end
    end
end

 nested_loop!(Y,X,d2,r2,n,m,f)

On v0.4 this has fewer allocations and runs faster (maybe 2-3x).
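
A minimal sketch of the 0.5 approach to the original question (slicing without
allocating a copy): view/@view return a SubArray backed by the parent array.
On 0.4, sub/slice give a similar SubArray.

d = randn(3, 5)
c1 = view(d, :, 2)    # SubArray referencing d's memory, no copy
c2 = @view d[:, 2]    # same thing via the macro
dot(c1, randn(3))     # behaves like a vector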


On Tuesday, October 4, 2016 at 11:45:53 PM UTC+11, Niccolo' Antonello wrote:

> what is the best way to slice an array without allocating? I thank you!
>
> On Tuesday, October 4, 2016 at 1:20:29 PM UTC+2, Kristoffer Carlsson wrote:
>>
>> The slicing of "r" will also make a new array and thus allocate. 
>
>

[julia-users] Re: memory allocation in nested loops

2016-10-04 Thread 'Greg Plowman' via julia-users
You haven't passed r in as an argument to the function.
Try nested_loop!(Y,X,d,r,n,m,f)


On Tuesday, October 4, 2016 at 8:05:41 PM UTC+11, Niccolo' Antonello wrote:

> Hi all,
>
> I'm trying to make this code to go faster:
>
> n,m = 100,25
> f = 100
> d = randn(3,n)
> r = randn(3,m)
>
> Y = zeros(Complex{Float64},f,m)
> X = ones(f,n)+im*ones(f,n)
>
> function 
> nested_loop!(Y::Array{Complex{Float64}},X::Array{Complex{Float64}},
>   d::Array{Float64,2},n::Int64,m::Int64,f::Int64)
> for i_1 = 1:f
> for i_3 = 1:n
> k = 2*π*i_1*d[:,i_3]
> for i_2 = 1:m
> A = exp(im*dot(k,r[:,i_2]))
> Y[i_1,i_2] += X[i_1,i_3] * A
> end
> end
> end
> end
>
> @time nested_loop!(Y,X,d,n,m,f)
> Profile.clear_malloc_data()
> Y = zeros(Complex{Float64},f,m)
> @time nested_loop!(Y,X,d,n,m,f)
>
> I see that it is allocating a lot of memory. 
> So I also tried to put k and A on Arrays and preallocate that, but this 
> didn't remove the allocations. 
> Could you please tell me what could be possibly done to make this code 
> faster?
> I thank you! 
>
>

[julia-users] Re: What is the deal with macros that do not return expressions?

2016-10-01 Thread 'Greg Plowman' via julia-users
In your two examples, note the difference *when* the macro argument is 
multiplied by 2.

In @ex_timestwo the multiplication happens at runtime;
in @n_timestwo it happens at macro expansion time.

a = 6
@ex_timestwo(a)
will return 12 as expected,

whereas:
a = 6
@n_timestwo(a)
will still result in an error because, at macro expansion time, it is trying to
evaluate (:a) * 2.



On Saturday, October 1, 2016 at 7:26:50 PM UTC+10, Lyndon White wrote:

> I was generally under the impression that a macro always had the return a 
> `Expr`
> and that doing otherwise would make the compiler yell at me.
>
> Apparently I was wrong, at least about the second part:
> https://github.com/ChrisRackauckas/ParallelDataTransfer.jl/issues/3
>
>
>
> macro ex_timestwo(x)
> :($(esc(x))*2)
> end
>
>
>
> @ex_timestwo(4)
> 8
>
> macroexpand(:(@ex_timestwo(4)))
> :(4 * 2)
>
> @ex_timestwo(a)
> LoadError: UndefVarError: a not defined
>
>
> macroexpand(:(@ex_timestwo(a)))
> :(a * 2)
>
>
>
> ?
>
> VS: not returning an expression, and just doing a thing:
>
> macro n_timestwo(x)
>  x*2
> end
>
>
> @n_timestwo(4)
> 8
>
>
> macroexpand(:(@n_timestwo(4)))
> 8
>
>
> @n_timestwo(a)
> LoadError: MethodError: no method matching *(::Symbol, ::Int64)
>
>
> macroexpand(:(@n_timestwo(a)))
> :($(Expr(:error, MethodError(*,(:a,2)
>
>
>
>
> Is this a feature?
> Or is it just undefined behavior?
>
>

Re: [julia-users] fieldtype() for parameterised types

2016-09-19 Thread 'Greg Plowman' via julia-users
Ah, TypeVars, parameter and bound fields.
Thanks Tim.
This looks like what I'm after. Will investigate.



[julia-users] fieldtype() for parameterised types

2016-09-19 Thread 'Greg Plowman' via julia-users
For a parameterised composite type, I want to distinguish between 
fields defined with parameters and generic fields.

An example is probably best:
 
type Foo{T,N}
a::Array
b::Array{T,N}
end

fieldtype(Foo,:a) returns Array{T,N}
fieldtype(Foo,:b) returns Array{T,N}



And if I use different parameters:

type Bar{S,M}
a::Array
b::Array{S,M}
end

fieldtype(Bar,:a) returns Array{T,N}
fieldtype(Bar,:b) returns Array{S,M}


So if I'm careful to use parameter names different from the defaults for the
field type (Array in this case), I could perhaps distinguish between
parameterised and non-parameterised fields.

However, it would be nice if fieldtype returned no parameters for the case
where they are not specified for the field.
i.e. fieldtype(Foo,:a) returns Array

Are there any other suggestions?
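
One possible sketch along the TypeVar lines from Tim's reply: treat a field as
"parameterised" if its declared type mentions one of the type's own TypeVars
(compared by identity against T.parameters). The helper name is mine and it
relies on 0.4-era internals, so it's an experiment rather than a supported API.

function usesowntypevar(T::DataType, field::Symbol)
    ft = fieldtype(T, field)
    ownvars = collect(T.parameters)
    mentions(t) = isa(t, TypeVar)  ? any(v -> v === t, ownvars) :
                  isa(t, DataType) ? any(mentions, t.parameters) : false
    return mentions(ft)
end

usesowntypevar(Foo, :a)   # expected false: Array's own free T,N, not Foo's
usesowntypevar(Foo, :b)   # expected true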



[julia-users] Re: Error when assigning values to an object

2016-09-16 Thread 'Greg Plowman' via julia-users
Export PhyNode from MyClass module.


[julia-users] Re: code design question – best ideomatic way to define nested types?

2016-09-15 Thread 'Greg Plowman' via julia-users

Another variation on Chris's @commonfields and Tom's @base & @extend:

This is somewhere between examples 2 and 3:
it avoids the copy-and-paste of example 2, and
it avoids delegating to Foo as in example 3.

abstract AbstractFoo

type Foo <: AbstractFoo
bar
baz
end

type Foobar <: AbstractFoo
@splatfields Foo
barbaz
bazbaz
end

Also allows for "multiple" composition/inheritance.

 


Re: [julia-users] code design question – best ideomatic way to define nested types?

2016-09-15 Thread 'Greg Plowman' via julia-users
Bart,
Which one is the FooData solution?
Is this Example 1,2 or 3? Or another solution.



Re: [julia-users] slow for loops because of comparisons

2016-09-08 Thread 'Greg Plowman' via julia-users
The difference is probably simd.

The branched code will not use simd.

Either of these should eliminate branch and allow simd. 
ak += ss1>ss2
ak += ifelse(ss1>ss2, 1, 0)

Check with @code_llvm, look for section vector.body


 at 5:45:30 AM UTC+10, Dupont wrote:

> What is strange to me is that this is much slower
>
>
> function essai(n, s1, s2)
> a = Vector{Int64}(n)
>
> @inbounds for k = 1:n
> ak = 0
> for ss1 in s1, ss2 in s2
> if ss1 > ss2
> ak += 1
> end
> end
> a[k] = ak
> end
> end
>
>
>

[julia-users] printf format for round-trip Floats

2016-09-02 Thread 'Greg Plowman' via julia-users
Yes, of course I can!
Thanks David.
That is embarrassingly simple.

[julia-users] printf format for round-trip Floats

2016-09-01 Thread 'Greg Plowman' via julia-users
I'm trying to print Float64 with all significant digits, so that 
re-inputting these numbers from output strings will result in exactly 
the same number (is this called a round-trip?)

I understand that Julia uses the Grisu algorithm to do this automatically for
values displayed at the REPL and with println().

I'm using @printf to output, so I tried  "%20.17g" as a format specifier 
and this seems to work.

But then I compared this to println() and some numbers vary in the last 
digit (e.g @printf displays 389557.48616130429, whereas println() displays 
389557.4861613043)
So it seems @printf(".17g") sometimes displays an extra unnecessary digit?

Is there another printf format specifer that I can use?

Alternatively, I'm using the following to first convert Float to string, 
and then pass string to @printf:

iobuffer = IOBuffer()
print(iobuffer, x)
str_hits = takebuf_string(iobuffer)
@printf("%20s", str_hits)

but this seems a bit messy.
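
A slightly tidier sketch of the same idea (not necessarily David's suggestion
from the follow-up): string(x) already produces the shortest round-trip
representation, so it can be passed to a %s format directly:

x = 389557.4861613043
@printf("%20s", string(x))   # prints the same digits as println(x)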



Re: [julia-users] Re: Adding items into a tuple

2016-08-30 Thread 'Greg Plowman' via julia-users

>
>
> I could use an array, but one product can correspond to different number 
> of bases.
>
> That's why I decided to use tuples.
>
> You could use an Array of (different length) Arrays, similar to Array of 
Tuples.


Another strategy might be to construct a vector of (Product, Base) pairs, 
and then iterate over this vector:

product_bases = Tuple{ProductType, BaseType}[]

for k in Products, b in Bases
if accordance[k,b]
push!(product_bases, (k,b))
end
end

for (k,b) in product_bases
...
end




[julia-users] Re: Why aren't multiple const definitions allowed on a line?

2016-08-22 Thread 'Greg Plowman' via julia-users
global const u = 7, v = 11, w = 13
seems to work.



Re: [julia-users] Fast random access to Dict

2016-08-21 Thread 'Greg Plowman' via julia-users
Thanks Erik.
It turns out that for me, Dict is around 20% faster than sorted vector.
-- Greg
 

On Thursday, August 18, 2016 at 10:11:42 AM UTC+10, Erik Schnetter wrote:

> For the insertion stage, the Dict is likely ideal.
>
> For the second stage, a Dict that is based on hashing is a good data 
> structure. Another data structure worth examining would be a sorted vector 
> of `(Int64, Float64)` pairs (sorted by the `Int64` keys). Interpolation 
> search <https://en.wikipedia.org/wiki/Interpolation_search> can then have 
> a complexity of `O(log log N)`. My personal guess is that a well-optimized 
> Dict (i.e. with both hash function and hash table size "optimized") will be 
> faster if we assume a close to random access, as it will have fewer memory 
> accesses.
>
> Julia's Dict implementation (see base/dict.jl) has constants that you 
> could tune (read: play with). There is also a function `Base.rehash!` that 
> you can call to increase the size of the hash table, which might increase 
> performance by avoiding hash collisions, if you have sufficient memory.
>
> -erik
>
>
> On Wed, Aug 17, 2016 at 7:58 PM, 'Greg Plowman' via julia-users <
> julia...@googlegroups.com > wrote:
>
>> I need fast random access to a Dict{Int64,Float64}.
>> My application has a first phase in which the Dict is populated, and a 
>> second phase where it accessed randomly (with no further additions or 
>> deletions).
>> There are about 50,000 to 100,000 entries, with keys in the range 10^9 to 
>> 10^14.
>>
>> Firstly is a Dict the most optimal data structure? (presumably Dict is 
>> faster than SparseVector for random lookup?)
>>
>> Secondly, is there any "preconditioning" I can do after creating the Dict 
>> in phase 1 but before phase 2, to optimise the Dict for random access. 
>> (e.g. sizehint, sorting keys)
>>
>> I do appreciate access might already be fast and optimal and I should 
>> just benchmark, but I'm just looking for some theoretical 
>> insight before benchmarking.
>>
>>
>>
>
>
> -- 
> Erik Schnetter <schn...@gmail.com > 
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] Re: Composition of setindex!

2016-08-21 Thread 'Greg Plowman' via julia-users
But also note that a[1:3] = 1.0 will modify a (rather than a copy)
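
A small illustrative sketch of the distinction (indexed assignment writes in
place, indexed reads copy):

a = zeros(5)
a[1:3] = 1.0       # setindex!: writes into a itself
b = a[1:3]         # getindex: returns a copy on 0.4
s = sub(a, 1:3)    # or view(a, 1:3) on 0.5: refers to a's memory
s[1] = 2.0         # modifies a[1]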

On Sunday, August 21, 2016 at 6:34:28 PM UTC+10, Kristoffer Carlsson wrote:
>
> Range indexing produces copies yes. Either just write the loops or use 
> "view" or "sub" to refer to the original memory.



[julia-users] Fast random access to Dict

2016-08-17 Thread 'Greg Plowman' via julia-users
I need fast random access to a Dict{Int64,Float64}.
My application has a first phase in which the Dict is populated, and a 
second phase where it accessed randomly (with no further additions or 
deletions).
There are about 50,000 to 100,000 entries, with keys in the range 10^9 to 
10^14.

Firstly is a Dict the most optimal data structure? (presumably Dict is 
faster than SparseVector for random lookup?)

Secondly, is there any "preconditioning" I can do after creating the Dict 
in phase 1 but before phase 2, to optimise the Dict for random access. 
(e.g. sizehint, sorting keys)

I do appreciate access might already be fast and optimal and I should just 
benchmark, but I'm just looking for some theoretical 
insight before benchmarking.




[julia-users] Re: How to Manipulate each character in of a string using a for loop in Julia ?

2016-08-17 Thread 'Greg Plowman' via julia-users
I think Jacob meant:

for char in a
   write(io, char + 1)
end


[julia-users] Re: Still confused about how to use pmap with sharedarrays in Julia

2016-08-11 Thread 'Greg Plowman' via julia-users
pmap inside a function also seems to work:

@everywhere f(A,i) = (println("A[$i] = $(A[i])+1"); A[i] += 1)
wrapped_pmap(A) = pmap(i -> f(A,i), 1:length(A))

S = SharedArray(Int,10)
S[:] = 1:length(S)

output1 = pmap(i -> f(S,i), 1:length(S)) #error
output2 = wrapped_pmap(S) #seems to work


I'm quite confused about why second version with pmap wrapped in a function 
works.
Perhaps someone can explain?
 



[julia-users] Re: Still confused about how to use pmap with sharedarrays in Julia

2016-08-11 Thread 'Greg Plowman' via julia-users

>
>
> The fact that the meaning of "," changes depending on what is placed in
> [a,b,c] seems to have been the source of the issue.
>
> (As an aside, this inconsistency seems to me to be somewhat of a 
> less-than-desireable feature in Julia.)
>


I think this is changing in v0.5. See 
https://github.com/JuliaLang/julia/blob/master/NEWS.md#language-changes-1

Square brackets and commas (e.g. [x, y]) no longer concatenate arrays, and 
always simply construct a vector of the provided values. If x and y are 
arrays, [x, y] will be an array of arrays (#3737, #2488, #8599).



[julia-users] Re: Still confused about how to use pmap with sharedarrays in Julia

2016-08-10 Thread 'Greg Plowman' via julia-users
I have also found the combination of shared arrays, anonymous functions and 
parallel constructs confusing.

This StackOverflow question helped me: Shared array usage in Julia.


In essence, "although the underlying data is shared to all workers, the 
declaration is not. You will still need to pass in the reference to [the 
shared array]"

S = SharedArray(Int,3,4)
S[:] = 1:length(S)
@everywhere f(S,i) = (println("S[$i] = $(S[i])"); S[i])

output1 = pmap(i -> f(S,i), 1:length(S))# error
output2 = pmap((S,i) -> f(S,i), fill(S,length(S)), 1:length(S))
output3 = pmap(f, fill(S,length(S)), 1:length(S))


It seems then, in version 1, the reference S is local to the worker, but S
is not defined on the worker -> error.
In versions 2&3 the local S is passed as argument to worker and all works 
as expected.



[julia-users] Re: Unexpected Performance Behaviour

2016-08-01 Thread 'Greg Plowman' via julia-users
I get timing/allocations the other way around (test1, the hard-coded version,
is fast without allocation).
@code_warntype for test2 shows type-instability for s (because return type 
cannot be inferred for f1)
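
A hedged sketch of the usual 0.4 workaround (the names are mine): wrap the
polynomial in a callable immutable so that the loop specialises on a concrete
type; 0.5 specialises on function arguments itself, so this is 0.4-specific.

immutable PolyG1 end
Base.call(::PolyG1, r) = r^3 + r^5

function test2b(N, f1)
    r = 0.234; s = 0.0
    for n = 1:N
        s += f1(r)
    end
    return s
end

test2b(1_000_000, PolyG1())   # should now run without per-iteration allocation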


On Tuesday, August 2, 2016 at 2:33:24 PM UTC+10, Christoph Ortner wrote:

> Below are two tests, in the first a simple polynomial is "hard-coded", in 
> the second it is passed as a function. I would expect the two to be 
> equivalent, but the second case is significantly faster. Can anybody 
> explain what is going on?  @code_warntype doesn't show anything that would 
> explain it? 
>
> function test1(N)
>
> r = 0.234; s = 0.0
> for n = 1:N
> s += r^3 + r^5
> end
> return s
> end
>
>
> function test2(N, f1)
> r = 0.234; s = 0.0
> for n = 1:N
> s += f1(r)
> end
> return s
> end
>
>
> g1(r) = r^3 + r^5
>
>
> test1(10)
> test2(10, g1)
>
>
> println("Test1: hard-coded functions")
> @time test1(1_000_000)
> @time test1(1_000_000)
> @time test1(1_000_000)
>
>
> println("Test2: pass functions")
> @time test2(1_000_000, g1)
> @time test2(1_000_000, g1)
> @time test2(1_000_000, g1)
>
>
>
>
> # $ julia5 weird_test2.jl
> # Test1: hard-coded functions
> #   0.086683 seconds (4.00 M allocations: 61.043 MB, 50.75% gc time)
> #   0.142487 seconds (4.00 M allocations: 61.035 MB, 76.91% gc time)
> #   0.025388 seconds (4.00 M allocations: 61.035 MB, 4.28% gc time)
> # Test2: pass functions
> #   0.000912 seconds (5 allocations: 176 bytes)
> #   0.000860 seconds (5 allocations: 176 bytes)
> #   0.000846 seconds (5 allocations: 176 bytes)
>
>
>
>
>
>

[julia-users] Re: I can't believe this spped-up !

2016-07-21 Thread 'Greg Plowman' via julia-users
and also compare (note the @sync)

@time @sync @parallel for i in 1:10
sleep(1)
end

Also note that using reduction with @parallel will also wait:
 z = @parallel (*) for i = 1:n
 A
 end


On Friday, July 22, 2016 at 3:11:15 AM UTC+10, Kristoffer Carlsson wrote:

>
>
> julia> @time for i in 1:10
>sleep(1)
>end
>  10.054067 seconds (60 allocations: 3.594 KB)
>
>
> julia> @time @parallel for i in 1:10
>sleep(1)
>end
>   0.195556 seconds (28.91 k allocations: 1.302 MB)
> 1-element Array{Future,1}:
>  Future(1,1,8,#NULL)
>
>
>
> On Thursday, July 21, 2016 at 6:00:47 PM UTC+2, Ferran Mazzanti wrote:
>>
>> Hi,
>>
> mostly showing my astonishment, but I can't even understand the figures in 
>> this stupid parallelization code
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> produces
>>
>> elapsed time: 105.458639263 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>>
>> But then add @parallel in the for loop
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> @parallel for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> and get 
>>
>> elapsed time: 0.008912282 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>> look at the elapsed time differences! And I'm running this on my Xeon 
>> desktop, not even a cluster
>> Of course A-B reports
>>
>> 2x2 Array{Float64,2}:
>>  0.0  0.0
>>  0.0  0.0
>>
>>
>> So is this what one should expect from this kind of simple 
>> paralleizations? If so, I'm definitely *in love* with Julia :):):)
>>
>> Best,
>>
>> Ferran.
>>
>>
>>

[julia-users] Re: Clarification about @parallel (op) for loop

2016-07-20 Thread 'Greg Plowman' via julia-users

>
>
> does every worker compute func(i) and return it to the calling process, 
> which then reduces it on the fly? 
>
No.
 

> Or does every worker apply the reduction operator for its chunk and then 
> return it to the calling process?
>
Yes.
 

> The documentation says
> that "In case of @parallel for, the final reduction is done on the calling
> process."
> Does this mean that func(i) *for every i* is returned to the calling
> process?
>
No.
 

> On the other hand, from what I understand from this implementation of
> preduce, line 1723 seems to indicate that the reduction happens on the worker
> process itself, and only the *final* reduction is carried out on the
> calling process.
>
Yes. That is what is meant by "the *final* reduction is done on the calling
process". It's the reduction of the single already-reduced result from each
worker.


> Also, the documentation mentions: "In contrast, @parallel for can handle 
> situations where each iteration is tiny, perhaps merely summing two 
> numbers." However, my function func() is nowhere near this simple. Should I 
> still be using @parallel? Or try to go for pmap() with my own written 
> version of a pmap_reduce()?
>

@parallel *statically* splits the work among workers in *batches*. 
Statically means each worker is assigned the same number of loop iterations.
In batches means communication is minimised.

pmap dynamically splits the work among workers (dynamic load balancing)
Each worker is given the next task only when it has completed the previous 
task.
pmap is useful for uneven workloads and/or uneven processors.
It is less ideal for small workloads because of the communication per 
iteration.

Of course you could custom code a sort of hybrid, where pmap dishes out the 
work to workers in batches.
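
A hedged sketch of that hybrid (my own names; it assumes the per-item function
and the reduction operator can be serialised to, or are defined on, the
workers): pmap hands out whole batches, each batch is reduced on its worker,
and only per-batch results come back for the final reduction.

@everywhere function reduce_batch(f, op, idxs)
    acc = f(first(idxs))
    for i in idxs[2:end]
        acc = op(acc, f(i))
    end
    return acc
end

function pmap_reduce(f, op, n; batchsize=100)
    batches = [i:min(i+batchsize-1, n) for i in 1:batchsize:n]
    partials = pmap(idxs -> reduce_batch(f, op, idxs), batches)
    return reduce(op, partials)
end

# pmap_reduce(i -> i^2, +, 10^4)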



[julia-users] Re: Array of vectors in type definition

2016-07-18 Thread 'Greg Plowman' via julia-users
Although Vector{Array{Float64}} works, be aware that it is specifying a 
vector of the abstract type Array{Float64} (any dimension allowed).
In general this is not recommended. It is better to specify a concrete type 
where possible.
In your case, this probably means Vector{Vector{Float64}}.
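
A quick illustrative sketch of the difference:

a = Array{Float64}[]      # abstract element type: any dimensionality fits
push!(a, rand(3))
push!(a, rand(2, 2))      # also accepted

b = Vector{Float64}[]     # concrete element type: 1-d arrays only
push!(b, rand(3))
# push!(b, rand(2, 2))    # would throw a MethodError (no conversion)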


On Tuesday, July 19, 2016 at 8:12:36 AM UTC+10, Ferran Mazzanti wrote:

> Hi Mauro,
> your solution seems to work... though I do not understand exactly why :)
> Even Vector{Array{Float64}} works.
> Thanks for your kind help :)
> Ferran.
>


[julia-users] Re: Calculating pi using parallel computing

2016-07-18 Thread 'Greg Plowman' via julia-users
Maybe something like this:

See http://docs.julialang.org/en/release-0.4/manual/parallel-computing/ 
(particularly 
section Parallel Map and Loops)

@everywhere f(x) = 4/(1+(x*x))

function calc_pi(n)
    a,b = 0,1
    dx = (b-a)/n
    sumfx = @parallel (+) for x in a:dx:b
        f(x)
    end
    return sumfx * dx
end

function test_pi()
    for e in 1:9
        n = 10^e
        pi_approx = calc_pi(n)
        @printf("%12d %20.15f %20.15f\n", n, pi_approx, pi_approx - pi)
    end
end



On Monday, July 18, 2016 at 2:50:06 PM UTC+10, Arundhyoti Sarkar wrote:

> Hi Guys,
>
> I am trying to run Julia on a cluster for the first time. Essentially I 
> want to calculate the value of Pi by numerical integration. integration of 
> 4/(1+x*x) from 0 to 1 = pi 
> I have written this piece of code but I am unable to parallelize it 
> properly. 
>
> ```
> n=100 # larger the value, larger the number of pieces for integration, 
> greater the precision.
> @everywhere f(x) = 4/(1+(x*x))
> a,b = 0,1
> delta = (b-a)/n
> xs = a+ (0:n-1) * delta
> pieces = pmap(f,xs)
> r1 = @spawn cpi_P.calsum(pieces[1:5])
> r2 = @spawn cpi_P.calsum(pieces[6:end])
>
> println("Process 1 : ",fetch(r1)* delta)
> println("Process 2 : ",fetch(r2)* delta)
> print("PI = ")
> ```
> cpi_P.jl is a module containing the following
> ```
> module cpi_P
> export calsum
>
>   function calsum(fx)
> sum = 0
> for i in 1:size(fx,1)
>sum = sum + fx[i]
> end
> return sum
>end
>
> end
> ```
> The above code works when the number of processes are 2. I want to 
> generalize this to N processes where I can assign N set of pieces for 
> integration using @parallel. Problem is that calsum is dependent on the 
> limits and the cpi_P module needs to accept fx and the limits as inputs. 
>
> I am a novice. Any kind of help will e deeply appreciated. Thanks!
>
>

Re: [julia-users] Trying to understand parametric types as function arguments

2016-07-15 Thread 'Greg Plowman' via julia-users

Yes, thanks for that Tim.
I can see that now.
That's a nice workaround.



Re: [julia-users] Trying to understand parametric types as function arguments

2016-07-14 Thread 'Greg Plowman' via julia-users

>
> What about the following?
>
 
myfunc(vec::Vector{Dict{ASCIIString}}) = 1
a = [ Dict("a"=>1), Dict("b"=>1.0) ]
b = [ Dict(1=>:hello), Dict(2=>:world) ]

 
julia> myfunc(a)
1

julia> myfunc(b)
ERROR: MethodError: `myfunc` has no method matching 
myfunc(::Array{Dict{Int64,Symbol},1})



[julia-users] Parallel computing: SharedArrays not updating on cluster

2016-06-21 Thread 'Greg Plowman' via julia-users
Yes.
AFAIK,
Shared arrays are shared across multiple processes on the same machine.
Distributed arrays can be distributed across different machines.

[julia-users] Re: Standard wrapper for global variable optimization hack?

2016-06-13 Thread 'Greg Plowman' via julia-users
I have a feeling this is a stupid question, but here it is anyway:

Why do you need a wrapper? Why not just declare the object const directly?

const x = 0.0

function inc_global()
x += 1
end


Re: [julia-users] Why does adding type assertion to field slow down this example?

2016-06-09 Thread 'Greg Plowman' via julia-users

I know this is not replying to your original question but it might be 
relevant or not.

There was a suggestion for speeding up dispatch of abstract types in 
another julia-users discussion: dispatch slowdown when iterating over array 
with abstract values.
In your case, the idea was to use a manual case/select statement:

if isa(y.x, UInt64)
s += sqrt(y.x::UInt64)
elseif isa(y.x, UInt32)
s += sqrt(y.x::UInt32)
elseif
...
end



I wrote a (rather crude) macro to generate similar code. You could use it 
in your case as follows:
 
function flog2(y)
    s = 0.0
    for i=1:100
        @DynamicDispatch y.x::Unsigned begin
            s += sqrt(y.x)
        end
    end
    s
end


I guess you can read this as "for all T in Unsigned, generate: if isa(y.x, T);
s += sqrt(y.x::T); end".

flog2 should run faster and without allocation.

Of course this is not super clean, but if performance is a priority it may 
be of use.


And here's the macro.
Not fully tested or recommended, but you may be able to model something 
from it.

isdefined(:__precompile__) && __precompile__()

module DynamicDispatchUtilities
export  ConcreteSubtypes,
        TransformExpr,
        @DynamicDispatch

function ConcreteSubtypes(T::DataType)
    types = DataType[]
    if isleaftype(T)
        push!(types, T)
    else
        for S in subtypes(T)
            if S != T   # guard against infinite recursion, Any is a subtype of itself
                append!(types, ConcreteSubtypes(S))
            end
        end
    end
    return types
end

function TransformExpr(x, e1, e2)
    y = copy(x)
    if x == e1
        y = e2
    elseif isa(x, Expr)
        for i = 1:length(x.args)
            y.args[i] = TransformExpr(x.args[i], e1, e2)
        end
    end
    return y
end

macro DynamicDispatch(var, expr)
    # var should be of form e::A, where e is a symbol/reference and A is an abstract type
    @assert isa(var, Expr) && var.head == :(::)
    e = var.args[1]
    A = var.args[2]
    concrete_types = ConcreteSubtypes(eval(current_module(), A))
    C = concrete_types[end]
    case_block = TransformExpr(expr, e, :($e::$C))
    for i = length(concrete_types)-1 : -1 : 1
        C = concrete_types[i]
        case_block = Expr(:if, :(isa($e,$C)), TransformExpr(expr, e, :($e::$C)), case_block)
    end
    return esc(case_block)
end

end





[julia-users] Re: pmap on functions with variable #'s of arguments

2016-06-02 Thread 'Greg Plowman' via julia-users
Of course I meant:
Fprime(a) = F(a,b0,c0)


[julia-users] Re: pmap on functions with variable #'s of arguments

2016-06-02 Thread 'Greg Plowman' via julia-users

>
>
> function Fprime(a; b0 = b, c0 = c)
>F(a, b0, c0) # treating b and c above as fixed
> end 
>

Where did you define b and c? Did you define on all workers?
 

>
> (i) this does not solve my problem when the a_i's are different sizes and 
> can't be put into one array 
>

Not sure what you mean by this.

In any case, either of these 2 methods work for me:

@everywhere begin
F(a,b,c) = a+b+c

global const b0 = 20
global const c0 = 200
Fprime(a) = a+b0+c0
end

a = [1,2,3,4,5]
b = fill(10, length(a))
c = fill(100, length(a))

pmap(F,a,b,c)
pmap(Fprime,a)




[julia-users] Re: Importing Functions on Different Workers

2016-06-01 Thread 'Greg Plowman' via julia-users
I find that putting everything required on workers into a module is the way 
to go.
Then just use using Module (after adding workers)

This works for me (v0.4.5):

ProjectModule.jl:
module ProjectModule
using DataFrames
include("function1.jl")
export function1
end


function1.jl:
function function1(input::DataFrame)
# do something to input
println("function1 here")
end

Now test:
addprocs()
using DataFrames
using ProjectModule
df = DataFrame(A = 1:4, B = ["M", "F", "F", "M"])
pmap(function1, fill(df,10))



On Thursday, June 2, 2016 at 8:24:52 AM UTC+10, ABB wrote:

> Hello - 
>
> I have the following problem: I would like to find a good way to import a 
> collection of user-defined functions across several workers.  Some of my 
> functions are defined on DataFrames, but "using DataFrames" is not getting 
> me anywhere on the other workers.
>
> I think the problem I am running into may result from some combination of 
> the rules of scope and the command "using".  
>
> Alternatively, given what I want to do, maybe running this with "julia -p 
> N" is not the best way to make use of N workers in the way I want to.  
>
> I open several workers: "julia -p 3"
>
> "using DataFrames" - this brings DataFrames into the 'main' worker's scope 
> (I think), but not into the scope of subordinate workers.
>
> "using ProjectModule" - I am trying to load a module across all workers 
> which contains several functions I have written (maybe this is not the best 
> way to accomplish this task?)
>
> This error is returned:
>
> LoadError: LoadError: UndefVarError: DataFrame not defined
>
> ProjectModule looks something like
>
> module ProjectModule
>include("function1.jl")
>
>export function1
> end
>
> where function1 is defined as
>
> function1(input::DataFrame)
>#do something to input
> end 
>
> I have tried a few things:
>
> - Running "@everywhere using DataFrames" from within the main worker (this 
> has worked once or twice - that is, I can then use function1 on a different 
> worker - but it isn't consistent)
>
> - Opening the workers at the outset using julia -p N -L 
> ProjectModule.jl... (repeated N times)  I get: "LoadError: UndefVarError: 
> DataFrames not defined"
>
> - I also put "using DataFrames" into the ProjectModule.jl file.  The 
> program definitely hated that.  (Specifically: I was warned that I was 
> "overwriting DataFrames".)
>
> Is there a better way to load both the DataFrames package and the 
> functions I have written across a couple of workers?  
>
> Thanks!
>
> ABB
>


[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread 'Greg Plowman' via julia-users

>
>
> It looks like that SO answer is moving data into the global scope of each 
> worker. It is probably worth experimenting with but I'd be worried about 
> performance implications of non-const global variables. It's probably the 
> case that this is still a win for my use case though. Thanks for the link.
>


Why not declare input vector as const?


I have a similar requirement for a simulation. I found it convenient 
to wrap everything that is required on workers into a module.

module SphericalHarmonicTransforms
export spherical_harmonic_transforms

const input = Vector{Float64}(10^7)
...

function spherical_harmonic_transforms(idx)
coefficients = Vector{Complex128}(10^6)
... # reference input vector here
return coefficients
end
end



Then to propagate to all workers, just write using 
SphericalHarmonicTransforms

function sim()
    idx = 1
    limit = 1
    nextidx() = (myidx = idx; idx += 1; myidx)
    @sync for worker in workers()
        @async while true
            myidx = nextidx()
            myidx ≤ limit || break
            coefficients = remotecall_fetch(worker, spherical_harmonic_transforms, myidx)
            write_results_to_disk(coefficients)
        end
    end
end

addprocs()
using SphericalHarmonicTransforms
sim()




Re: [julia-users] Re: calling sort on a range with rev=true

2016-05-07 Thread 'Greg Plowman' via julia-users

I think the problem with implementing sort(r::Range; rev::Bool=false), i.e.
for the keyword rev only, is that it would leave no definition for the other
keyword arguments. Alternatively, if the other keywords were implemented for
Range, then the method would not be type-stable. As Steven pointed out,
sort(-5:5, by=abs) could not be represented by a Range.

Admittedly I was initially confused about dispatch with keyword 
arguments, and my understanding is still quite tenuous.
If what I'm saying makes sense then I don't think there is a case for 
filing an issue/PR?
 

On Friday, May 6, 2016 at 2:36:27 AM UTC+10, Milan Bouchet-Valat wrote:

> On Sunday, May 1, 2016 at 19:11 -0700, 'Greg Plowman' via julia-users wrote:
> > 
> > Extending/overwriting sort in range.jl (line 686) 
> > 
> > sort(r::Range) = issorted(r) ? r : reverse(r) 
> > 
> > with the following worked for me. 
> > 
> > function Base.sort(r::Range; rev::Bool=false) 
> > if rev 
> > issorted(r) ? reverse(r) : r 
> > else 
> > issorted(r) ? r : reverse(r) 
> > end 
> > end 
> Makes sense, please file an issue or a pull request (if the latter, be 
> sure to also add tests to prevent regressions). 
>
>
> Regards 
>


Re: [julia-users] calling sort on a range with rev=true

2016-05-04 Thread 'Greg Plowman' via julia-users
Thanks Steven for fixing this.

(If you pass them keyword arguments, then they pretty much have to return a 
> full array, rather than try to be clever and return another range, for 
> type-stability.  e.g. sort(-5:5, by=abs) can't be expressed in terms of a 
> range.)


At first I didn't understand this, until I came around to understanding 
that keyword arguments don't participate in dispatch.
So it's not possible to specialise on a *particular* keyword. It's all or 
nothing.
I guess this makes keyword arguments slightly less useful in some
circumstances.

For my purpose, I defined my own reverse sort so that I could specialise:

sortrev(v::AbstractVector) = sort(v, rev=true)
sortrev(r::Range) = issorted(r) ? reverse(r) : r



[julia-users] Re: calling sort on a range with rev=true

2016-05-01 Thread 'Greg Plowman' via julia-users

Extending/overwriting sort in range.jl (line 686)

sort(r::Range) = issorted(r) ? r : reverse(r)

with the following worked for me.

function Base.sort(r::Range; rev::Bool=false)
    if rev
        issorted(r) ? reverse(r) : r
    else
        issorted(r) ? r : reverse(r)
    end
end



[julia-users] calling sort on a range with rev=true

2016-05-01 Thread 'Greg Plowman' via julia-users

There have been discussions about whether a range can substitute in most 
cases where a vector is required.

julia> sort(1:3)
1:3

julia> sort(1:3, rev=true)
ERROR: indexed assignment not defined for UnitRange{Int64}
 in sort! at sort.jl:222
 in sort! at sort.jl:292
 in sort! at sort.jl:402
 in sort at sort.jl:413

julia> sort(collect(1:3), rev=true)
3-element Array{Int64,1}:
 3
 2
 1



I guess it's unclear whether sort(1:3, rev=true) should return 3:-1:1 or 
[3,2,1] or be an error as it currently is.

Any thoughts?



[julia-users] Re: performance of two different array allocations

2016-04-21 Thread 'Greg Plowman' via julia-users
It seems concatenating, especially vectors, is quite common.
And it would be nice if concatenating with mixed scalar and vector 
arguments was type-stable.

I did some playing around and although I don't fully understand the impact, 
I [think I] was able to get a type-stable version of vcat with 
vector/scalar arguments.

Essentially widen signatures of vcat to allow mixed arguments and introduce 
full for Number:

abstractarray.jl line 742
vcat(V::AbstractVector...) = typed_vcat(promote_eltype(V...), V...)
vcat(V::Union{Number,AbstractVector}...) = typed_vcat(promote_eltype(V...), 
V...)

abstractarray.jl line 745
function typed_vcat(T::Type, V::AbstractVector...)
function typed_vcat(T::Type, V::Union{Number,AbstractVector}...)

Base.full(x::Number) = [x]

Here full acts as a sort of "promotion to vector".
This might seem a somewhat twisted use of full, so in typed_vcat, I
tried substituting:
a = similar(isa(V[1], AbstractVector) ? full(V[1]) : [V[1]], T, n)
instead of
a = similar(full(V[1]), T, n)
but it didn't work.


I don't really know how to try this out directly, so I overwrote methods:

function Base.vcat(V::Union{Number,AbstractVector}...)
    #println("my vcat")
    Base.typed_vcat(Base.promote_eltype(V...), V...)
end

function Base.typed_vcat(T::Type, V::Union{Number,AbstractVector}...)
    #println("my typed_vcat")
    n::Int = 0
    for Vk in V
        n += length(Vk)
    end
    a = similar(full(V[1]), T, n)
    pos = 1
    for k=1:length(V)
        Vk = V[k]
        p1 = pos+length(Vk)-1
        a[pos:p1] = Vk
        pos = p1+1
    end
    a
end

function Base.full(x::Number)
    #println("my full")
    [x]
end



On Friday, April 22, 2016 at 3:32:27 AM UTC+10, Jeremy Kozdon wrote:

> In a class I'm teaching the students are using Julia and I couldn't for 
> the life of me figure out why one of my students codes was allocating a lot 
> of memory.
>
> I finally paired it down the following example that I don't understand:
>
> function version1(N)
>   b = [1;zeros(N-1)]
>   println(typeof(b))
>   for k = 1:N
> for j = 1:N
>   b[j] += k
> end
>   end
> end
>
>
> function version2(N)
>   b = zeros(N)
>   b[1] = 1
>   println(typeof(b))
>   for k = 1:N
> for j = 1:N
>   b[j] += k
> end
>   end
> end
>
> N = 1000
> println("compiling..")
> @time version1(N)
> version2(N)
> println()
> println()
>
> println("Version 1")
> @time version1(N)
> println()
>
> println("Version 2")
> @time version2(N)
>
> The output of this (without the compiling output) in v0.4.5 is:
>
> Version 1
> Array{Float64,1}
>   0.092473 seconds (3.47 M allocations: 52.920 MB, 3.24% gc time)
>
> Version 2
> Array{Float64,1}
>   0.001195 seconds (27 allocations: 8.828 KB)
>
> Both version produce the same type for Array b, but in version1 every time 
> through the loop allocation happens and in the version2 the only allocation 
> is of the initial array.
>
> I've not run into this one before (because I would never do version1), but 
> as all of us that teach know students will always surprise you with their 
> approaches.
>
> Any help understanding what's going on would be appreciated.
>


[julia-users] Re: a'*b and svds for custom operators

2016-04-20 Thread 'Greg Plowman' via julia-users

>
>
> 3. Any other methods I should implement for my operator?
>
>
 http://docs.julialang.org/en/release-0.4/manual/interfaces/#abstract-arrays



[julia-users] Re: a'*b and svds for custom operators

2016-04-20 Thread 'Greg Plowman' via julia-users


On Thursday, April 21, 2016 at 11:17:32 AM UTC+10, Madeleine Udell wrote:
>
> Hi, 
>
> I'm trying to define my own custom operator type that will allow me to 
> implement my own * and '* operations for use inside eigenvalue or singular 
> value routines like eigs and svds. But I'm having trouble making this work.
>
> Here's a simple example reimplementing matrix multiplication, with 
> questions sprinkled as comments in the code.
>
> ---
>
> import Base: *, Ac_mul_b, size
> # Ac_mul_b doesn't seem to live in base; where does it live?
>
> It's Ac_mul_B (capital B)
 

> type MyOperator
> A::Array{Float64}
> end
>

type can subtype from AbstractMatrix
can also parameterise to use concrete Array

type MyOperator{T} <: AbstractMatrix{T}
A::Matrix{T}
end
 

>
> o = MyOperator(randn(5,4))
>
> *(o::MyOperator, v::AbstractVector) = (println("using custom * method"); 
> o.A*v)
> o*rand(4) # this works
>
> Ac_mul_b(o::MyOperator, v::AbstractVector) = (println("using custom '* 
> method"); o.A'*b)
>

 Ac_mul_B(o::MyOperator, v::AbstractVector) = (println("using custom '* method"); Ac_mul_B(o.A, v))

o'*rand(5) # this doesn't work; instead, it calls ctranspose(o)*v, which is 
> not what I want
>
> size(o::MyOperator) = size(o.A)
> size(o::MyOperator, i::Int) = size(o)[i]
>
> svds(o, nsv=1)
> # this doesn't work; svds requires a `zero` method for the parametrized 
> type of my operator. If I know the type will always be Float64, what's the 
> easiest way to tell this to svds?
>

you get zero() for free when MyOperator is a subtype of AbstractMatrix
 

>
> ---
>
> Questions, collected:
> 1. Ac_mul_b doesn't seem to live in base; where does it live? Or should I 
> be extending a different function? This causes o'*v to call 
> ctranspose(o)*v, which is not what I want. (The default ctranspose for new 
> types seems to be the identity.)
> 2. svds requires a `zero` method for the parametrized type of my operator. 
> If I know the type will always be Float64, what's the easiest way to tell 
> this to svds? Here's the eigs/svds code, but I haven't been able to track
> down enough information on
> AbstractMatrices and parametrized types to make this work.
> 3. Any other methods I should implement for my operator?
>
> Thanks!
> Madeleine
>


import Base: *, Ac_mul_B, size, show

type MyOperator{T} <: AbstractMatrix{T}
    A::Matrix{T}
end

*(o::MyOperator, v::AbstractVector) = (println("using custom * method"); o.A*v)
Ac_mul_B(o::MyOperator, v::AbstractVector) = (println("using custom '* method"); Ac_mul_B(o.A, v))
size(o::MyOperator) = size(o.A)

o = MyOperator(randn(5,4))
o * rand(4)
o' * rand(5)
svds(o, nsv=1)



[julia-users] Re: Starting Julia with Julia -p 2 provides 1 worker

2016-04-20 Thread 'Greg Plowman' via julia-users
Sorry, I can't really help you with command line julia -p 2
But what happens when you call addprocs() from REPL?
Also, what is the value of CPU_CORES (typed at REPL)?



[julia-users] Re: Starting Julia with Julia -p 2 provides 1 worker

2016-04-19 Thread 'Greg Plowman' via julia-users


julia -p 2 will start Julia with 2 processes.
nprocs() will return 2
nworkers() will return 1 (1 less than nprocs()) 

http://docs.julialang.org/en/release-0.4/stdlib/parallel/?highlight=nworkers#Base.nworkers


On Tuesday, April 19, 2016 at 6:58:30 PM UTC+10, Iacopo Poli wrote:

> Hi,
>
> I'm trying to start Julia with more than one worker, but if I type in the 
> terminal for example "julia -p 2", then in the REPL nworkers() returns 1.
> I have version *0.5.0-dev+3488 *and a Intel Core i5 (Macbook Pro Mid 
> 2012).  
>
> Running system_profiler:
>
> "system_profiler SPHardwareDataType
>
> Hardware:
>
>  Hardware Overview:
>
>
>   Model Name: MacBook Pro
>
>   Model Identifier: MacBookPro9,2
>
>   ...
>
>   Total Number of Cores: 2
>
>   ...
> "
>
> So I should be able to start Julia with two workers... Any guess why this 
> isn't working?
>


[julia-users] Re: Starting Julia with Julia -p 2 provides 1 worker

2016-04-19 Thread 'Greg Plowman' via julia-users
julia -p 2 will start Julia with 3 processes (the master plus 2 workers).
nprocs() will return 3
nworkers() will return 2 (1 less than nprocs())

http://docs.julialang.org/en/release-0.4/stdlib/parallel/?highlight=nworkers#Base.nworkers


On Tuesday, April 19, 2016 at 6:58:30 PM UTC+10, Iacopo Poli wrote:

> Hi,
>
> I'm trying to start Julia with more than one worker, but if I type in the 
> terminal for example "julia -p 2", then in the REPL nworkers() returns 1.
> I have version *0.5.0-dev+3488 *and a Intel Core i5 (Macbook Pro Mid 
> 2012).  
>
> Running system_profiler:
>
> "system_profiler SPHardwareDataType
>
> Hardware:
>
>  Hardware Overview:
>
>
>   Model Name: MacBook Pro
>
>   Model Identifier: MacBookPro9,2
>
>   ...
>
>   Total Number of Cores: 2
>
>   ...
> "
>
> So I should be able to start Julia with two workers... Any guess why this 
> isn't working?
>


[julia-users] @async tasks not yielding

2016-04-18 Thread 'Greg Plowman' via julia-users
I 'm somewhat confused about when @async tasks switch.
My understanding was tasks would yield on a blocking operation such as IO.

On 0.4.1, I started multiple workers on remote hosts asynchronously, and 
these started as I expected.
Presumably the tasks launching the workers yielded when the command to 
start the worker was issued:
 io, pobj = open(detach(cmd), "r")

On 0.4.5, workers are started one host at a time (all workers on a host are 
started before moving on to the next host), suggesting the launching tasks 
are not yielding.
Is this a reasonable assumption?

At this stage, I assumed something changed between 0.4.1 and 0.4.5 which 
affected task yielding/switching.

I was further confused when I tried setting up a minimal example using 
println(), which I thought would yield:

function Test()
    @sync begin
        for task = 1:5
            @async begin
                for job = 1:5
                    println("task.job = $(task).$(job)")
                    #sleep(5)
                    #yield()
                end
            end
        end
    end
end


On both 0.4.1 and 0.4.5 Test() produces output in task order, suggesting no 
yielding.
Explicit call to sleep or yield works as expected, producing output in job 
order.

In any case, it seems something has changed between 0.4.1 and 0.4.5 wrt 
yielding when issuing commands?
Should detach(cmd), where cmd starts a worker on remote host, yield?




[julia-users] Re: Warning on operations mixing BigFloat and Float64

2016-04-18 Thread 'Greg Plowman' via julia-users
Perhaps you could overwrite the convert function to include a warning.
(Maybe just temporarily until you discover all the conversions)
 
As an example, this is a quick hack, modified from definition in mpfr.jl

@eval begin
    function Base.convert(::Type{BigFloat}, x::Float64)
        println("*** Warning: converting Float64 to BigFloat")
        z = BigFloat()
        ccall(($(string(:mpfr_set_,:d)), :libmpfr), Int32,
              (Ptr{BigFloat}, Float64, Int32), &z, x, Base.MPFR.ROUNDING_MODE[end])
        return z
    end
end


julia> BigFloat(3.0) + 2.5
*** Warning: converting Float64 to BigFloat
5.50




On Tuesday, April 19, 2016 at 1:50:52 AM UTC+10, Paweł Biernat wrote:

> I know about the promotion, but this is precisely what I want to avoid.  
> It might happen that there are hard-coded Float64 constants somewhere in 
> the code and I would like to locate them and replace with higher precision 
> ones.  I could probably just do a direct search in the source code to 
> locate these spots but I still might miss some of them.  I guess it would 
> be safer to just print a warning when an operations mixing both types 
> occurs and then eliminate these spots case by case.
>
> Maybe defining my own AbstractFloat type with a minimal set of operations 
> and passing it as an argument instead of BigFloat would be a better 
> solution.  Then if I don't implement the operations involving Float64 I 
> will get an error every time the mixing occurs.
>
> On Monday, April 18, 2016 at 17:28:14 UTC+2, Tomas Lycken wrote:
>>
>> Adding a BigFloat and a Float64 should automatically promote both to 
>> BigFloats, avoiding precision loss for you.
>>
>> julia> BigFloat(2.9) + 0.3
>> 3.199900079927783735911361873149871826171875
>>
>> Do you have a case where this doesn’t happen?
>>
>> // T
>>
>> On Monday, April 18, 2016 at 4:32:52 PM UTC+2, Paweł Biernat wrote:
>>
>> Hi,
>>>
>>> I want to make sure I am not loosing any precision in my code by 
>>> accidentally mixing BigFloat and Float64 (e.g. adding two numbers of 
>>> different precision).  I was thinking about replacing the definitions of 
>>> `+`, `-`, etc. for BigFloat but if you do that for all two argument 
>>> functions this would be a lot of redefining, so I started to wonder if 
>>> there is a more clever approach.  Is there any simple hack to get a warning 
>>> if this happens?
>>>
>>> Best,
>>> Paweł
>>>
>>> ​
>>
>

[julia-users] Re: Parametric types which add or delete fields.

2016-04-17 Thread 'Greg Plowman' via julia-users
Notwithstanding the explosion of possible types and the excellent advice 
and insight provided by Tim, you can get the following to compile and run.



typealias Color ASCIIString
typealias Horsepower Float64
typealias Model ASCIIString
typealias Year Int

type Car{
        C<:Union{Color,Void},
        H<:Union{Horsepower,Void},
        M<:Union{Model,Void},
        Y<:Union{Year,Void}
    }
    color::C
    horsepower::H
    model::M
    year::Y
end

# outer constructor
function Car(; color=nothing, horsepower=nothing, model=nothing, year=nothing)
    Car(color, horsepower, model, year)
end

# create cars using keyword arguments in any order
myoldcar = Car(year=1967, horsepower=450.0, color="Red")
mynewcar = Car(year=2016, color="White")


# function operates on any homogeneous garage of cars with color and year defined
function Count_Red_1963{C<:Color,H,M,Y<:Year}(garage::Vector{Car{C,H,M,Y}})
    count = 0
    for car in garage
        if car.color == "Red" && car.year == 1963
            count += 1
        end
    end
    return count
end

garage_color_model_year = [ Car(color="Red", model="GTX", year=1963) for i = 1:10 ]
# TODO: work out how to avoid messy conversion
garage_color_model_year = convert(Vector{typeof(garage_color_model_year[1])}, garage_color_model_year)

garage_color_year = [ Car(color="Red", year=y) for y = 1960:1969 ]
garage_color_year = convert(Vector{typeof(garage_color_year[1])}, garage_color_year)

Count_Red_1963(garage_color_model_year)
Count_Red_1963(garage_color_year)



On Friday, April 15, 2016 at 1:54:17 PM UTC+10, Anonymous wrote:

> So I have a pretty complex problem in designing my type data structure 
> which I haven't been able to solve.  Let's say I have an abstract car type:
>
> abstract AbstractCar
>
> now let's say I have the following possible features for a car:
>
> color
> horsepower
> model
> year
>
> Now I want to be able to create all possible composite concrete types 
> containing any combination of these features, so that would be 2^4=16 
> different composite types I need to define, and I would need to give them 
> all names, so for example one of these 16 composite types would be
>
> CarHorseModel <: AbstractCar
>   horsepower
>   model
> end
>
> Obviously this is untenable since the number of possible types grows 
> exponentially with the number of features.  Thus a different approach that 
> avoids this is to have one concrete type
>
> Car <: AbstractCar
>   color
>   horsepower
>   model
>   year
> end
>
> and then to set it up so that any features which I don't want to include 
> are set to nothing.  This avoids the problem above, but is 
> messy and inelegant.  However the bigger problem with it is that I want to 
> have a container type for all my cars, call this container type Garage, 
> and I want this container type to require that all cars in my garage have 
> the same features.  Thus in my original design with 16 separate composite 
> types, I could simply set up my container type to be of the form
>
> type Garage{C <: AbstractCar}
>   cars::Vector{C}
> end
>
> Unfortunately for the approach where I have a single Car type with all 
> the features included and those I don't want set to nothing, there is no 
> straight forward way to enforce this.  The situation is further complicated 
> because I then have various methods which I would like to dispatch on 
> certain types of garages.  For instance one method may only work for 
> garages which contain cars which have a color feature, maybe another 
> method only works on garages which have both a color feature and a year 
> feature.
>
> What I would like is something that works like a parametric type, but 
> instead of the parametric type changing the type of the fields, it 
> effectively decides what field names are included in my composite type.  So 
> for instance Car{Color, Year} would produce the type
>
> Car{Color, Year} <: AbstractCar
>   color::ASCIIString
>   year::Int
> end
>
> However! A further problem, is that say I have a method which works on all 
> garages which contain Car types which have a color feature, so that 
> includes 2^3=8 different possible Garage types (all those which contain 
> cars with a color feature), so this also grows exponentially with the 
> number of features.
>
> What does everyone think is the right way to handle this problem
>
>
>

[julia-users] Re: Should `append!(a::Vector{T}, items::NTuple{N, T})` be defined in Base?

2016-04-13 Thread 'Greg Plowman' via julia-users
Considering the existing append! is pretty loose wrt the items being 
appended, a simple extension to the signature might work:

append!{T}(a::Array{T,1}, items::Union{AbstractVector,Tuple})


You could extend this yourself to try it out.
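
For example, a minimal sketch of one way to try the idea (a separate method
rather than editing the signature in Base):

function Base.append!(a::Vector, items::Tuple)
    for x in items
        push!(a, x)
    end
    return a
end

v = [1, 2]
append!(v, (3, 4, 5))   # -> [1, 2, 3, 4, 5]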


On Thursday, April 14, 2016 at 4:07:45 AM UTC+10, Davide Lasagna wrote:

> Hi all,
>
> I need to extend a vector with contents from n-tuples i generate 
> iteratively. Currently 
>
> append!(a::Vector{T}, items::NTuple{N, T})
>
> is not defined, whereas the method 
>
> append!{T}(a::Array{T,1}, items::AbstractVector)
>
> exists and is defined in array.jl. 
>
> Anyone finds this useful to justify opening a pull request? What would be 
> a good implementation?
>


Re: [julia-users] Creating an empty 2D array and append values to it

2016-04-13 Thread 'Greg Plowman' via julia-users


> julia> reshape(d,3,2)
> 3x2 Array{ASCIIString,2}:
>  "x1"  "y2"
>  "y1"  "x3"
>  "x2"  "y3"

This is because of Julia's column-major ordering.

> you see the problem ? instead I would like to have :
> x1 y1
> x2 y2
> ..
> xn yn


In this case, you could use:

julia> reshape(d,2,3)'
3x2 Array{ASCIIString,2}:
 "x1"  "y1"
 "x2"  "y2"
 "x3"  "y3"



On Wednesday, April 13, 2016 at 12:38:41 AM UTC+10, Fred wrote:

> Thank you very much Tim !
>
> In fact I want to create an X,Y array so if I create a 1D array, I can 
> only append to it (x1,y1) then (x2,y2)... (xn, yn), because I calculate x1 
> before x2...
>
> julia> d = ["x1", "y1", "x2", "y2", "x3", "y3"]
> 6-element Array{ASCIIString,1}:
>  "x1"
>  "y1"
>  "x2"
>  "y2"
>  "x3"
>  "y3"
>
> julia> reshape(d,3,2)
> 3x2 Array{ASCIIString,2}:
>  "x1"  "y2"
>  "y1"  "x3"
>  "x2"  "y3"
>
> you see the problem ? instead I would like to have :
> x1 y1
> x2 y2
> ..
> xn yn 
>
> because I want to be able to work on columns and line ... of course 
> another easy solution is to use dataframes, but I tried with arrays because 
> the code should be faster... :)
>
> On Tuesday, April 12, 2016 at 16:27:50 UTC+2, Tim Holy wrote:
>>
>> Note that in `a = Array{Float64,2}`, `a` is a *type*, not an *instance*. 
>> You 
>> presumably mean `a = Array(Float64, 0, 0)`. 
>>
>> But Yichao is right that you can't grow a 2d array, only a 1d one. 
>>
>> Best, 
>> --Tim 
>>
>

[julia-users] Re: How to initialize a Matrix{T}?

2016-04-08 Thread 'Greg Plowman' via julia-users
Maybe something like:

x = Array{Int}(3,4,0)

x = vec(x)
append!(x, 1:12)
x = reshape(x,3,4,1)

x = vec(x)
append!(x, 13:24)
x = reshape(x,3,4,2)

Of course you could wrap it into a more convenient function.
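
A hedged sketch of such a wrapper (the name is mine):

function grow_last_dim(x::Array, slab::AbstractArray)
    m, n = size(x, 1), size(x, 2)
    v = vec(x)
    append!(v, vec(slab))
    return reshape(v, m, n, length(v) ÷ (m * n))
end

x = Array{Int}(3, 4, 0)
x = grow_last_dim(x, reshape(collect(1:12), 3, 4))
x = grow_last_dim(x, reshape(collect(13:24), 3, 4))
size(x)   # (3, 4, 2)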


On Friday, April 8, 2016 at 7:31:30 PM UTC+10, Sisyphuss wrote:

> I have a related question:
>
> How to initialize an array of fixed shape except one dimension, such as `m 
> * n * 0`. 
> And I would like to dynamically increase the last dimension later.
>


[julia-users] Re: pmap scheduling and idle workers near the "end" of a job

2016-04-06 Thread 'Greg Plowman' via julia-users
In that case, try sorting the tasks in descending order of complexity (i.e. 
start longest running tasks first).
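
A hedged sketch of that idea (estimate_cost is a hypothetical per-item cost
estimate; f and lst are as in your code):

costs = [estimate_cost(x) for x in lst]
order = sortperm(costs, rev=true)        # heaviest items first
results = pmap(f, lst[order])
results = results[invperm(order)]        # restore the original order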


On Thursday, April 7, 2016 at 9:23:56 AM UTC+10, Thomas Covert wrote:

> Its hard to construct a MWE without the data I am using, but I can give a 
> bit more detail.  
>
> There are 20 workers and about 4200 elements in lst, so using your 
> terminology, I've got many more tasks than workers.  If lst were sorted by 
> complexity (its not, I randomized the order), the complexity of item i is 
> roughly cubic in i.
>
> On Wednesday, April 6, 2016 at 4:29:16 PM UTC-5, Greg Plowman wrote:
>>
>>
>> It's difficult to comment without knowing more detail about numbers of 
>> workers, their relative speed, number of tasks and their expected 
>> completion times.
>>
>> As an extreme example, say you have 4 workers (all of the same speed) and 
>> 2x15-minute tasks and 16x1-minute tasks.
>>
>> Depending on how this is scheduled, this will take between 15 to 19 
>> minutes. Optimally:
>> Worker 1: 15
>> Worker 2: 15
>> Worker 3: 1 1 1 1 1 1 1 1 
>> Worker 4: 1 1 1 1 1 1 1 1
>>
>> After 8 minutes, workers 3 and 4 will be idle, and remain idle for the 
>> remaining 7 minutes before workers 1 and 2 finish.
>>
>> I had a similar problem where I had fast and slow workers, and initially 
>> split the work into a number of tasks similar to the number of workers.
>> This left an overhang similar to what you describe.
>> In my case more granularity helped. Splitting into many tasks so that 
>> #tasks >> #workers helped.
>>
>>
>>
>> On Thursday, April 7, 2016 at 2:21:28 AM UTC+10, Thomas Covert wrote:
>>
>>> The manual suggests that pmap(f, lst) will dynamically "feed" elements 
>>> of lst to the function f as each worker completes its previous assignment, 
>>> and in my read of the code for pmap, this is indeed what it does.
>>>
>>> However, I have found that, in practice, many of the workers that I spin 
>>> up for pmap tasks are idle for the last, say, half of the total time needed 
>>> to complete the task.  In my pmap usage, it is the case that the complexity 
>>> of the workload varies across elements of lst, so that some elements should 
>>> take a long time to compute (say, 15 minutes on a core of my machine) and 
>>> others a short time (less than 1 minute).  Knowing about this heterogeneity 
>>> and observing this pattern of idle workers after about half of the work is 
>>> done would normally lead me to think that pmap is scheduling workers ahead 
>>> of time, not dynamically.  Some workers will get "lucky" and have easier 
>>> than average workload, and others are unlucky and have harder workload.  At 
>>> the end of the calculation, only the unlucky workers are still working. 
>>>  However, this isn't what pmap is doing, so I'm kinda confused. 
>>>
>>> Am I crazy?  The documentation for pmap says that it is scheduling tasks 
>>> dynamically and I am pre-randomizing the order of work in lst so that 
>>> worker 1 doesn't get easier tasks, in expectation, than worker N.  Or is it 
>>> more likely that I've got a bug somewhere?
>>>
>>>
>>>

[julia-users] Re: Help with convoluted types and Vararg

2016-04-06 Thread 'Greg Plowman' via julia-users
Pair is a parametric type, and in Julia these are invariant, 
meaning element subtyping does not imply pair subtyping.

In your case, the pair elements are subtypes:

Tuple{Function,Int,Int,Int} <: Tuple{Function,Vararg{Int}} # true
Int <: Int # true

but the Pair is not:

Pair{Tuple{Function,Int,Int,Int}, Int} <: Pair{Tuple{Function,Vararg{Int}}, 
Int} # false
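
One possible workaround sketch: construct the pairs with the exact (wider) type
up front, so the vector's element type already matches the field and no convert
is needed:

type Foo
    x::Array{Pair{Tuple{Function,Vararg{Int}}, Int}}
end

P = Pair{Tuple{Function,Vararg{Int}}, Int}
z2 = [P((+,1,5,7), 3), P((-,6,5,3,5,8), 1)]
Foo(z2)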


On Wednesday, April 6, 2016 at 4:51:39 AM UTC+10, Seth wrote:

> Hi all,
>
> I have the following on 0.4.6-pre+18:
>
> z = [Pair((+,1,5,7), 3), Pair((-,6,5,3,5,8), 1)]
> type Foo
> x::Array{Pair{Tuple{Function, Vararg{Int}}, Int}}
> end
>
>
> and I'm getting
>
> julia> Foo(z)
> ERROR: MethodError: `convert` has no method matching 
> convert(::Type{Pair{Tuple{Function,Vararg{Int64}},Int64}}, 
> ::Pair{Tuple{Function,Int64,Int64,Int64},Int64})
> This may have arisen from a call to the constructor 
> Pair{Tuple{Function,Vararg{Int64}},Int64}(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   Pair{A,B}(::Any, ::Any)
>   call{T}(::Type{T}, ::Any)
>   convert{T}(::Type{T}, ::T)
>  in copy! at abstractarray.jl:310
>  in call at none:2
>
>
> It's probably a stupid oversight, but I'm stuck. Can someone point me to 
> the error?
>


[julia-users] Re: pmap scheduling and idle workers near the "end" of a job

2016-04-06 Thread 'Greg Plowman' via julia-users

It's difficult to comment without knowing more detail about numbers of 
workers, their relative speed, number of tasks and their expected 
completion times.

As an extreme example, say you have 4 workers (all of the same speed) and 
2x15-minute tasks and 16x1-minute tasks.

Depending on how this is scheduled, this will take between 15 to 19 
minutes. Optimally:
Worker 1: 15
Worker 2: 15
Worker 3: 1 1 1 1 1 1 1 1 
Worker 4: 1 1 1 1 1 1 1 1

After 8 minutes, workers 3 and 4 will be idle, and remain idle for the 
remaining 7 minutes before workers 1 and 2 finish.

I had a similar problem where I had fast and slow workers, and initially 
split the work into a number of tasks similar to the number of workers.
This left an overhang similar to what you describe.
In my case more granularity helped. Splitting into many tasks so that 
#tasks >> #workers helped.
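For the record, the change in my case was roughly the following (a hedged
sketch reusing f and lst from the thread; the chunk size is made up):

# Before: one big pre-assigned chunk per worker -- any slow chunk leaves the rest idle.
chunks = [ lst[i:nworkers():end] for i in 1:nworkers() ]

# After: many small chunks, so pmap keeps handing work to whichever worker frees up first.
chunksize = 10
chunks = [ lst[i:min(i + chunksize - 1, end)] for i in 1:chunksize:length(lst) ]

results = vcat(pmap(chunk -> map(f, chunk), chunks)...)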



On Thursday, April 7, 2016 at 2:21:28 AM UTC+10, Thomas Covert wrote:

> The manual suggests that pmap(f, lst) will dynamically "feed" elements of 
> lst to the function f as each worker completes its previous assignment, and 
> in my read of the code for pmap, this is indeed what it does.
>
> However, I have found that, in practice, many of the workers that I spin 
> up for pmap tasks are idle for the last, say, half of the total time needed 
> to complete the task.  In my pmap usage, it is the case that the complexity 
> of the workload varies across elements of lst, so that some elements should 
> take a long time to compute (say, 15 minutes on a core of my machine) and 
> others a short time (less than 1 minute).  Knowing about this heterogeneity 
> and observing this pattern of idle workers after about half of the work is 
> done would normally lead me to think that pmap is scheduling workers ahead 
> of time, not dynamically.  Some workers will get "lucky" and have easier 
> than average workload, and others are unlucky and have harder workload.  At 
> the end of the calculation, only the unlucky workers are still working. 
>  However, this isn't what pmap is doing, so I'm kinda confused. 
>
> Am I crazy?  The documentation for pmap says that it is scheduling tasks 
> dynamically and I am pre-randomizing the order of work in lst so that 
> worker 1 doesn't get easier tasks, in expectation, than worker N.  Or is it 
> more likely that I've got a bug somewhere?
>
>
>

[julia-users] Re: enforcing homogeneity of vector elements in function signature

2016-04-06 Thread 'Greg Plowman' via julia-users
Actually, I'm totally wrong.
The Union won't work.
Sorry for bad post.


On Wednesday, April 6, 2016 at 1:52:15 PM UTC+10, Greg Plowman wrote:

>
> A workaround would be to have two methods, one for the homogeneous 
>> elements in the first parameter, as you suggest, and a second for a vector 
>> with homogeneous elements in both parameters, with both T, N specified in 
>> the signature. But I have to write an extra method...
>>
>
> As pointed out, you can't have a *single* type for the method signature 
> for both homogeneous and heterogeneous N, because of Julia's parametric 
> type invariance.
>
> However, it seems you might want to write two methods to specialise the 
> implementation.
>
> If not, you can
> use a common functional body as Jeffrey suggested, or
> use an empty signature (will match anything, but no performance penalty)
> use a Union in the signature:
>
> function bar{T,N}(x::Union{Vector{Foo{T}},Vector{Foo{T,N}}})
> println("Hello, Foo!")
> end
>
>
> On Tuesday, April 5, 2016 at 6:46:21 AM UTC+10, Davide Lasagna wrote:
>
>> Thanks, yes, I have tried this, but did not mention what happens.
>>
>> For the signature you suggest, you get a `MethodError` in the case the 
>> vector `x` is homogeneous in both parameters.
>>
>> Look at this code
>>
>> type Foo{T, N}
>> a::NTuple{N, T}
>> end
>>
>> function make_homogeneous_Foos(M)
>> fs = Foo{Float64, 2}[]
>> for i = 1:M
>> f = Foo{Float64, 2}((0.0, 0.0))
>> push!(fs, f)
>> end
>> fs
>> end
>>
>> function bar{T}(x::Vector{Foo{T}})
>> println("Hello, Foo!")
>> end
>>
>> const fs = make_homogeneous_Foos(100)
>>
>> bar(fs)
>>
>> which results in 
>>
>> ERROR: LoadError: MethodError: `bar` has no method matching 
>> bar(::Array{Foo{Float64,2},1})
>>
>> A workaround would be to have two methods, one for the homogeneous 
>> elements in the first parameter, as you suggest, and a second for a vector 
>> with homogeneous elements in both parameters, with both T, N specified in 
>> the signature. But I have to write an extra method...
>>
>>
>> On Monday, April 4, 2016 at 9:32:55 PM UTC+1, John Myles White wrote:
>>>
>>> Vector{Foo{T}}?
>>>
>>> On Monday, April 4, 2016 at 1:25:46 PM UTC-7, Davide Lasagna wrote:

 Hi all, 

 Consider the following example code

 type Foo{T, N}
 a::NTuple{N, T}
 end

 function make_Foos(M)
 fs = Foo{Float64}[]
 for i = 1:M
 N = rand(1:2)
 f = Foo{Float64, N}(ntuple(i->0.0, N))
 push!(fs, f)
 end
 fs
 end

 function bar{F<:Foo}(x::Vector{F})
 println("Hello, Foo!")
 end

 const fs = make_Foos(100)

 bar(fs)

 What would be the signature of `bar` to enforce that all the entries of 
 `x` have the same value for the first parameter T? As it is now, `x` could 
 contain an `Foo{Float64}` and a `Foo{Int64}`, whereas I would like to 
 enforce homogeneity of the vector elements in the first parameter.

 Thanks




[julia-users] Re: enforcing homogeneity of vector elements in function signature

2016-04-05 Thread 'Greg Plowman' via julia-users


> A workaround would be to have two methods, one for the homogeneous 
> elements in the first parameter, as you suggest, and a second for a vector 
> with homogeneous elements in both parameters, with both T, N specified in 
> the signature. But I have to write an extra method...
>

As pointed out, you can't have a *single* type for the method signature for 
both homogeneous and heterogeneous N, because of Julia's parametric type 
invariance.

However, it seems you might want to write two methods to specialise the 
implementation.

If not, you can
use a common functional body as Jeffrey suggested, or
use an empty signature (will match anything, but no performance penalty)
use a Union in the signature:

function bar{T,N}(x::Union{Vector{Foo{T}},Vector{Foo{T,N}}})
println("Hello, Foo!")
end


On Tuesday, April 5, 2016 at 6:46:21 AM UTC+10, Davide Lasagna wrote:

> Thanks, yes, I have tried this, but did not mention what happens.
>
> For the signature you suggest, you get a `MethodError` in the case the 
> vector `x` is homogeneous in both parameters.
>
> Look at this code
>
> type Foo{T, N}
> a::NTuple{N, T}
> end
>
> function make_homogeneous_Foos(M)
> fs = Foo{Float64, 2}[]
> for i = 1:M
> f = Foo{Float64, 2}((0.0, 0.0))
> push!(fs, f)
> end
> fs
> end
>
> function bar{T}(x::Vector{Foo{T}})
> println("Hello, Foo!")
> end
>
> const fs = make_homogeneous_Foos(100)
>
> bar(fs)
>
> which results in 
>
> ERROR: LoadError: MethodError: `bar` has no method matching 
> bar(::Array{Foo{Float64,2},1})
>
> A workaround would be to have two methods, one for the homogeneous 
> elements in the first parameter, as you suggest, and a second for a vector 
> with homogeneous elements in both parameters, with both T, N specified in 
> the signature. But I have to write an extra method...
>
>
> On Monday, April 4, 2016 at 9:32:55 PM UTC+1, John Myles White wrote:
>>
>> Vector{Foo{T}}?
>>
>> On Monday, April 4, 2016 at 1:25:46 PM UTC-7, Davide Lasagna wrote:
>>>
>>> Hi all, 
>>>
>>> Consider the following example code
>>>
>>> type Foo{T, N}
>>> a::NTuple{N, T}
>>> end
>>>
>>> function make_Foos(M)
>>> fs = Foo{Float64}[]
>>> for i = 1:M
>>> N = rand(1:2)
>>> f = Foo{Float64, N}(ntuple(i->0.0, N))
>>> push!(fs, f)
>>> end
>>> fs
>>> end
>>>
>>> function bar{F<:Foo}(x::Vector{F})
>>> println("Hello, Foo!")
>>> end
>>>
>>> const fs = make_Foos(100)
>>>
>>> bar(fs)
>>>
>>> What would be the signature of `bar` to enforce that all the entries of 
>>> `x` have the same value for the first parameter T? As it is now, `x` could 
>>> contain an `Foo{Float64}` and a `Foo{Int64}`, whereas I would like to 
>>> enforce homogeneity of the vector elements in the first parameter.
>>>
>>> Thanks
>>>
>>>
>>>

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-05 Thread 'Greg Plowman' via julia-users
Thanks for your replies. I'm starting to understand some of this now.
In particular, I'm learning there are many aspects to dynamic dispatch.

Also worth noting the difference between:

if isa(features[i], A)
retval += evaluate(features[i]::A)
elseif isa(features[i], B)
retval += evaluate(features[i]::B)
else
retval += evaluate(features[i])
end

and 

if isa(features[i], A)
x = evaluate(features[i]::A)
elseif isa(features[i], B)
x = evaluate(features[i]::B)
else
x = evaluate(features[i])
end
retval = x

Execution times depend on:
whether other subtypes exist or not
whether evaluate() for these subtypes return consistent types
whether run-time vector actually contains any of these other subtypes

In nearly all cases, method 1 is much faster.

Thanks again for all your help. It's much appreciated.


On Tuesday, April 5, 2016 at 7:39:41 AM UTC+10, Yichao Yu wrote:

>
> On Apr 4, 2016 1:36 PM, "Cedric St-Jean"  > wrote:
> >
> > I'm not a compiler dev, but here's how I understand it:
> >
>
> Sorry forgot to reply
>
> >
> > On Sunday, April 3, 2016 at 6:32:17 PM UTC-4, Greg Plowman wrote:
> >>
> >> It seems to me that slowness of dynamic dispatch and type instability 
> are orthogonal.
> >>
> >> I mean is dynamic dispatch inherently slow, or is it slow because it 
> involves type instability?
> >
> >
> > Type instability (uncertainty) leads to multiple-dispatch, and multiple 
> dispatch leads to more type instability (in general, but not always), as 
> the return type is harder to pin down.
> >  
> >>
> >> @code_warntype slow(features) suggests that the return type from 
> evaluate() is Float64, so I can't see any type instability.
> >> If this is correct, then why is dynamic dispatch so much slower than 
> run-time if statements?
> >
>
> I didn't know we are doing this optimization. I think the actual limit for 
> type inference is how many method actually matches. If you add more subtype 
> of Feature with different implementation of the function, I think you'll 
> see type inference give up at some point (the limit used to be 4 iirc)
>
> Currently, whenever type inference cannot determine a single function to 
> call, it fallback to a full dynamic dispatch at runtime. This is of course 
> not the best way to do it See 
> https://github.com/JuliaLang/julia/issues/10805 and 
> https://github.com/JuliaLang/julia/pull/11862.
>
> >
> > Each object contains an ID which is an integer (or a pointer - I don't 
> know, but it's the same thing).
> >
> > if isa(x,A)
> >
> > checks wheter x's ID is equal to A's ID. That's an integer comparison, 
> and very fast. In contrast, multiple-dispatch involves looking up the 
> method table for which function to call when x is an Int. I don't know how 
> it's implemented, but a dictionary look-up is likely, and that's much more 
> costly.
>
> A "hidden" cost of the dynamic dispatch is that the result needs to be 
> boxed. Since that's the only way one can return an arbitrary type object in 
> C. There are thought on how this can be improved but they involve more 
> complexity and it is not particularly clear if this particular case is 
> worth optimizing for (i.e. if it will regress more important cases for 
> example).
>
> >  
> >>
> >>
> >> For my own understanding, I was almost hoping that slow() was not 
> type-stable,
> >
> >
> > slow can be type-stable if the compiler assumes that no other 
> subtype/method will be added, which it seems to be doing. There's still the 
> multiple-dispatch cost, since it doesn't know which method to call. To 
> demonstrate the point about type-stability, if I define 
> >
> > type C <: Feature end
> > evaluate(f::C) = 1:3
> >
> > at the same time as I defined A and B, then slow becomes twice as slow 
> as before (430 microseconds vs. 210), even though I haven't even 
> instantiated any C object. That's because the compiler no longer assumes 
> that `evaluate` returns a Float (check code_warntype) and thus needs a 
> second multiple-dispatch to make the + call.
> >  
> >>
> >> and annotating with evaluate(features[i])::Float64 would let the 
> compiler produce faster code,
> >> but it makes no difference.
> >>
> >>
> >> On Sunday, April 3, 2016 at 11:07:44 PM UTC+10, Cedric St-Jean wrote:
> >>>
> >>> Good call, it was already pointed out in that thread.
> >>>
> >>> On Sat, Apr 2, 2016 at 11:11 PM, Yichao Yu  wrote:
> 
>  On Sat, Apr 2, 2016 at 10:53 PM, Cedric St-Jean  
> wrote:
>  > That's actually a compiler bug, nice!
>  >
>  > abstract Feature
>  >
>  > type A <: Feature end
>  > evaluate(f::A) = 1.0
>  >
>  > foo(features::Vector{Feature}) = isa(features[1], A) ?
>  > evaluate(features[1]::A) : evaluate(features[1])
>  >
>  > @show foo(Feature[A()])
>  >
>  > type C <: Feature end
>  > evaluate(f::C) = 100
>  >
>  > @show foo(Feature[C()])
>  >
>  > yields
>  >
>  > 

Re: [julia-users] Re: dispatch on type of tuple from ...

2016-04-03 Thread 'Greg Plowman' via julia-users
I saw your repost on julia-dev, but replying here.

m2 won't work because it expects a series of tuple arguments (which if 
supplied would slurp up into a tuple of tuples).

m3 seems the way to go. 

I wouldn't necessarily look at it as indirect however.
Think of it as one function with 2 methods: one version is for a single 
tuple argument, the other is for the equivalent expressed as 
multiple arguments.
Also, you can define the 2 versions as m3 rather than m3 and _m3.

In fact you could use dispatch to add error processing with a third method:

module Foos
type Foo{T <: Tuple}
end
m4{T<:Tuple}(f::Foo{T}, index::Tuple) = error("expected index type $T, 
got $(typeof(index))")
m4{T<:Tuple}(f::Foo{T}, index::T) = 42
m4(f::Foo, index...) = m4(f, index)
end

f1 = Foos.Foo{Tuple{Int}}()
Foos.m4(f1, 9)

f2 = Foos.Foo{Tuple{Int,Symbol}}()
Foos.m4(f2, (9, :a))
Foos.m4(f2, 9, :a)

Foos.m4(f2, (9.0, :a))  # error, wrong types
Foos.m4(f2, 9.0, :a)# error, wrong types



On Thursday, March 31, 2016 at 1:41:56 AM UTC+11, Tamas Papp wrote:

> Hi Bill, 
>
> It works for a single argument, but not for multiple ones. I have a 
> self-contained example: 
>
> --8<---cut here---start->8--- 
> module Foos 
>
> type Foo{T <: Tuple} 
> end 
>
> m1{T}(f::Foo{Tuple{T}}, index::T...) = 42 # suggested by Bill Hart 
>
> m2{T}(f::Foo{T}, index::T...) = 42 # what I thought would work 
>
> _m3{T}(f::Foo{T}, index::T) = 42 # indirect solution 
> m3{T}(f::Foo{T}, index...) = _m3(f, index) 
>
> end 
>
> f1 = Foos.Foo{Tuple{Int}}() 
>
> Foos.m1(f1, 9)  # works 
> Foos.m2(f1, 9)  # won't work 
> Foos.m3(f1, 9)  # works 
>
> f2 = Foos.Foo{Tuple{Int,Symbol}}() 
>
> Foos.m1(f2, 9, :a)   # won't work 
> Foos.m2(f2, 9, :a)   # won't work 
> Foos.m3(f2, 9, :a)   # indirect, works 
> --8<---cut here---end--->8--- 
>
> For more than one argument, only the indirect solution works. 
>
> Best, 
>
> Tamas 
>
> On Wed, Mar 23 2016, via julia-users wrote: 
>
> > The following seems to work, but I'm not sure whether it was what you 
> > wanted: 
> > 
> > import Base.getindex 
> > 
> > type Foo{T <: Tuple} 
> > end 
> > 
> > getindex{T}(f::Foo{Tuple{T}}, index::T...) = 42 
> > 
> > Foo{Tuple{Int}}()[9] 
> > 
> > Bill. 
> > 
> > On Wednesday, 23 March 2016 14:38:20 UTC+1, Tamas Papp wrote: 
> >> 
> >> Hi, 
> >> 
> >> My understanding was that ... in a method argument forms a tuple, but I 
> >> don't know how to dispatch on that. Self-contained example: 
> >> 
> >> import Base.getindex 
> >> 
> >> type Foo{T <: Tuple} 
> >> end 
> >> 
> >> getindex{T}(f::Foo{T}, index::T...) = 42 
> >> 
> >> Foo{Tuple{Int}}()[9] ## won't match 
> >> 
> >> ## workaround with wrapper: 
> >> 
> >> _mygetindex{T}(f::Foo{T}, index::T) = 42 
> >> mygetindex{T}(f::Foo{T}, index...) = _mygetindex(f, index) 
> >> 
> >> mygetindex(Foo{Tuple{Int}}(), 9) 
> >> 
> >> Is it possible to make it work without a wrapper in current Julia? 
> >> 
> >> Best, 
> >> 
> >> Tamas 
> >> 
>
>

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-03 Thread 'Greg Plowman' via julia-users
Thanks for your replies. 
I'm sorry to trouble you again, but I'm still confused about general 
concepts.

It seems to me that slowness of dynamic dispatch and type instability are 
orthogonal.
I mean is dynamic dispatch inherently slow, or is it slow because it 
involves type instability?

@code_warntype slow(features) suggests that the return type from evaluate() 
is Float64, so I can't see any type instability.
If this is correct, then why is dynamic dispatch so much slower than 
run-time if statements?

For my own understanding, I was almost hoping that slow() was not 
type-stable, 
and annotating with evaluate(features[i])::Float64 would let the compiler 
produce faster code,
but it makes no difference.


On Sunday, April 3, 2016 at 11:07:44 PM UTC+10, Cedric St-Jean wrote:

> Good call, it was already pointed out in that thread.
>
> On Sat, Apr 2, 2016 at 11:11 PM, Yichao Yu  > wrote:
>
>> On Sat, Apr 2, 2016 at 10:53 PM, Cedric St-Jean > > wrote:
>> > That's actually a compiler bug, nice!
>> >
>> > abstract Feature
>> >
>> > type A <: Feature end
>> > evaluate(f::A) = 1.0
>> >
>> > foo(features::Vector{Feature}) = isa(features[1], A) ?
>> > evaluate(features[1]::A) : evaluate(features[1])
>> >
>> > @show foo(Feature[A()])
>> >
>> > type C <: Feature end
>> > evaluate(f::C) = 100
>> >
>> > @show foo(Feature[C()])
>> >
>> > yields
>> >
>> > foo(Feature[A()]) = 1.0
>> >
>> > foo(Feature[C()]) = 4.94e-322
>> >
>> >
>> > That explains why performance was the same on your computer: the 
>> compiler
>> > was making an incorrect assumption about the return type of `evaluate`. 
>> Or
>> > maybe it's an intentional gamble by the Julia devs, for the sake of
>> > performance.
>> >
>> > I couldn't find any issue describing this. Yichao?
>>
>> This is effectively #265. It's not always predictable what assumption
>> the compiler makes now...
>>
>> >
>> >
>> > On Saturday, April 2, 2016 at 10:16:59 PM UTC-4, Greg Plowman wrote:
>> >>
>> >> Thanks Cedric and Yichao.
>> >>
>> >> This makes sense that there might be new subtypes and associated
>> >> specialised methods. I understand that now. Thanks.
>> >>
>> >> On my machine (v0.4.5 Windows), fast() and pretty_fast() seem to run in
>> >> similar time.
>> >> So I looked as @code_warntype as Yichao suggested and get the 
>> following.
>> >> I don't fully know how to interpret output, but return type from the 
>> final
>> >> "catchall" evaluate() seems to be inferred/asserted as Float64 (see
>> >> highlighted yellow line below)
>> >>
>> >> Would this explain why pretty_fast() seems to be as efficient as 
>> fast()?
>> >>
>> >> Why is the return type being inferred asserted as Float64?
>> >>
>> >>
>> >> julia> @code_warntype fast(features)
>> >> Variables:
>> >>   features::Array{Feature,1}
>> >>   retval::Float64
>> >>   #s1::Int64
>> >>   i::Int64
>> >>
>> >> Body:
>> >>   begin  # none, line 2:
>> >>   retval = 0.0 # none, line 3:
>> >>   GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
>> >>   GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1,
>> >> 
>> :(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
>> >> (Int64,(Base.sub_int)(1,1)))::Int64)))
>> >>   #s1 = (top(getfield))(GenSym(0),:start)::Int64
>> >>   unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 ===
>> >> 
>> (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
>> >> goto 1
>> >>   2:
>> >>   GenSym(3) = #s1::Int64
>> >>   GenSym(4) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
>> >>   i = GenSym(3)
>> >>   #s1 = GenSym(4) # none, line 4:
>> >>   unless
>> >> 
>> (Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
>> >> goto 4 # none, line 5:
>> >>   retval =
>> >> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
>> >>   goto 5
>> >>   4:  # none, line 7:
>> >>   retval =
>> >> (Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
>> >>   5:
>> >>   3:
>> >>   unless
>> >> 
>> (Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
>> >> === (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
>> >> ,:stop)::Int64,1))::Bool goto 2
>> >>   1:
>> >>   0:  # none, line 10:
>> >>   return retval::Float64
>> >>   end::Float64
>> >>
>> >>
>> >> julia> @code_warntype pretty_fast(features)
>> >> Variables:
>> >>   features::Array{Feature,1}
>> >>   retval::Float64
>> >>   #s1::Int64
>> >>   i::Int64
>> >>
>> >> Body:
>> >>   begin  # none, line 2:
>> >>   retval = 0.0 # none, line 3:
>> >>   GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
>> >>   GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1,
>> >> 
>> 

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread 'Greg Plowman' via julia-users
Thanks Cedric and Yichao.

This makes sense that there might be new subtypes and associated 
specialised methods. I understand that now. Thanks.

On my machine (v0.4.5 Windows), fast() and pretty_fast() seem to run in 
similar time.
So I looked as @code_warntype as Yichao suggested and get the following.
I don't fully know how to interpret output, but return type from the final 
"catchall" evaluate() seems to be inferred/asserted as Float64 (see 
highlighted yellow line below)

Would this explain why pretty_fast() seems to be as efficient as fast()?

Why is the return type being inferred asserted as Float64?


julia> @code_warntype fast(features)
Variables:
  features::Array{Feature,1}
  retval::Float64
  #s1::Int64
  i::Int64

Body:
  begin  # none, line 2:
  retval = 0.0 # none, line 3:
  GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
  GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
:(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
(Int64,(Base.sub_int)(1,1)))::Int64)))
  #s1 = (top(getfield))(GenSym(0),:start)::Int64
  unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 === 
(Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
 
goto 1
  2:
  GenSym(3) = #s1::Int64
  GenSym(4) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
  i = GenSym(3)
  #s1 = GenSym(4) # none, line 4:
  unless 
(Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
 
goto 4 # none, line 5:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
  goto 5
  4:  # none, line 7:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
  5:
  3:
  unless 
(Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
 
=== (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
,:stop)::Int64,1))::Bool goto 2
  1:
  0:  # none, line 10:
  return retval::Float64
  end::Float64


julia> @code_warntype pretty_fast(features)
Variables:
  features::Array{Feature,1}
  retval::Float64
  #s1::Int64
  i::Int64

Body:
  begin  # none, line 2:
  retval = 0.0 # none, line 3:
  GenSym(2) = (Base.arraylen)(features::Array{Feature,1})::Int64
  GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
:(((top(getfield))(Base.Intrinsics,:select_value)::I)((Base.sle_int)(1,GenSym(2))::Bool,GenSym(2),(Base.box)
(Int64,(Base.sub_int)(1,1)))::Int64)))
  #s1 = (top(getfield))(GenSym(0),:start)::Int64
  unless (Base.box)(Base.Bool,(Base.not_int)(#s1::Int64 === 
(Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
 
goto 1
  2:
  GenSym(4) = #s1::Int64
  GenSym(5) = (Base.box)(Base.Int,(Base.add_int)(#s1::Int64,1))
  i = GenSym(4)
  #s1 = GenSym(5) # none, line 4:
  unless 
(Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.A)::Bool
 
goto 4 # none, line 5:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,1.0))
  goto 6
  4:  # none, line 6:
  unless 
(Main.isa)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature,Main.B)::Bool
 
goto 5 # none, line 7:
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,0.0))
  goto 6
  5:  # none, line 9:
  GenSym(3) = 
(Main.evaluate)((Base.arrayref)(features::Array{Feature,1},i::Int64)::Feature)::Float64
  retval = 
(Base.box)(Base.Float64,(Base.add_float)(retval::Float64,GenSym(3)))
  6:
  3:
  unless 
(Base.box)(Base.Bool,(Base.not_int)((Base.box)(Base.Bool,(Base.not_int)(#s1::Int64
 
=== (Base.box)(Base.Int,(Base.add_int)((top(getfield))(GenSym(0)
,:stop)::Int64,1))::Bool goto 2
  1:
  0:  # none, line 12:
  return retval::Float64
  end::Float64

Julia>



On Sunday, April 3, 2016 at 8:04:35 AM UTC+10, Cedric St-Jean wrote:

>
>
> On Saturday, April 2, 2016 at 5:39:45 PM UTC-4, Greg Plowman wrote:
>>
>> Cedric,
>> On my machine fast() and pretty_fast() run in the roughly the same time.
>> Are you sure pre-compiled first?
>>
>
> Yep. I'm using @benchmark on Julia 0.4.5 on OSX, and the time difference 
> is definitely > 2X. 
>  
>
>>  
>> 1. If you add a default fallback method, say, evaluate(f::Feature) = -1.0
>> Theoretically, would that inform the compiler that evaluate() always 
>> returns a Float64 and therefore is type stable?
>>
>
> No, because I might write 
>
> type C <: Feature
> end
> evaluate(f::C) = 100
>
> and a Vector{Feature} might contain a C. Abstract types are open-ended, 
> new types can be created a runtime, so the compiler can't assume anything 
> about them, which is why they're not useful for performance. See here.
>
>  
>
>> Does dynamic dispatch mean compiler has to look up method at run time 
>> 

Re: [julia-users] dispatch slowdown when iterating over array with abstract values

2016-04-02 Thread 'Greg Plowman' via julia-users
Cedric,
On my machine fast() and pretty_fast() run in the roughly the same time.
Are you sure pre-compiled first?

Yichao,

> The compiler has no idea what the return type of the third one so this 
> version is still type unstable and you get dynamic dispatch at every 
> iteration for the floating point add

 
1. If you add a default fallback method, say, evaluate(f::Feature) = -1.0
Theoretically, would that inform the compiler that evaluate() always 
returns a Float64 and therefore is type stable?

2. I don't get the connection between "dynamic dispatch" and "type 
stability".
Does dynamic dispatch mean compiler has to look up method at run time 
(because it can't know type ahead of time).
This is equivalent to explicitly coding "static dispatch" with if 
statements?
If so then why is it so much slower?
Providing a fallback method as in 1. or explicitly returning Float64 
doesn't seem to improve speed, which naively suggests that slowness is from 
dynamic dispatch not from type instability???

3. Also, on my machine pretty_fast() is as fast as fast(). Why is this so, 
if pretty_fast() is supposedly type unstable?

As you can probably tell, I'm pretty confused on this.


On Sunday, April 3, 2016 at 6:34:09 AM UTC+10, Cedric St-Jean wrote:

> Thank you for the detailed explanation. I tried it out:
>
> function pretty_fast(features::Vector{Feature})
> retval = 0.0
> for i in 1 : length(features)
> if isa(features[i], A)
> x = evaluate(features[i]::A)
> elseif isa(features[i], B)
> x = evaluate(features[i]::B)
> else
> x = evaluate(features[i])
> end
> retval += x
> end
> retval
> end
>
> On my laptop, fast runs in 10 microseconds, pretty_fast in 30, and slow in 
> 210.
>
> On Saturday, April 2, 2016 at 12:24:18 PM UTC-4, Yichao Yu wrote:
>>
>> On Sat, Apr 2, 2016 at 12:16 PM, Tim Wheeler  
>> wrote: 
>> > Thank you for the comments. In my original code it means the difference 
>> > between a 30 min execution with memory allocation in the Gigabytes and 
>> a few 
>> > seconds of execution with only 800 bytes using the second version. 
>> > I thought under-the-hood Julia basically runs those if statements 
>> anyway for 
>> > its dispatch, and don't know why it needs to allocate any memory. 
>> > Having the if-statement workaround will be fine though. 
>>
>> Well, if you have a lot of these cheap functions being dynamically 
>> dispatched I think it is not a good way to use the type. Depending on 
>> your problem, you may be better off using a enum/flags/dict to 
>> represent the type/get the values. 
>>
>> The reason for the allocation is that the return type is unknown. It 
>> should be obvious to see if you check your code with code_warntype. 
>>
>> > 
>> > On Saturday, April 2, 2016 at 7:26:11 AM UTC-7, Cedric St-Jean wrote: 
>> >> 
>> >> 
>> >>> Therefore there's no way the compiler can rewrite the slow version to 
>> the 
>> >>> fast version. 
>> >> 
>> >> 
>> >> It knows that the element type is a Feature, so it could produce: 
>> >> 
>> >> if isa(features[i], A) 
>> >> retval += evaluate(features[i]::A) 
>> >> elseif isa(features[i], B) 
>> >> retval += evaluate(features[i]::B) 
>> >> else 
>> >> retval += evaluate(features[i]) 
>> >> end 
>> >> 
>> >> and it would make sense for abstract types that have few subtypes. I 
>> >> didn't realize that dispatch was an order of magnitude slower than 
>> type 
>> >> checking. It's easy enough to write a macro generating this expansion, 
>> too. 
>> >> 
>> >> On Saturday, April 2, 2016 at 2:05:20 AM UTC-4, Yichao Yu wrote: 
>> >>> 
>> >>> On Fri, Apr 1, 2016 at 9:56 PM, Tim Wheeler  
>> >>> wrote: 
>> >>> > Hello Julia Users. 
>> >>> > 
>> >>> > I ran into a weird slowdown issue and reproduced a minimal working 
>> >>> > example. 
>> >>> > Maybe someone can help shed some light. 
>> >>> > 
>> >>> > abstract Feature 
>> >>> > 
>> >>> > type A <: Feature end 
>> >>> > evaluate(f::A) = 1.0 
>> >>> > 
>> >>> > type B <: Feature end 
>> >>> > evaluate(f::B) = 0.0 
>> >>> > 
>> >>> > function slow(features::Vector{Feature}) 
>> >>> > retval = 0.0 
>> >>> > for i in 1 : length(features) 
>> >>> > retval += evaluate(features[i]) 
>> >>> > end 
>> >>> > retval 
>> >>> > end 
>> >>> > 
>> >>> > function fast(features::Vector{Feature}) 
>> >>> > retval = 0.0 
>> >>> > for i in 1 : length(features) 
>> >>> > if isa(features[i], A) 
>> >>> > retval += evaluate(features[i]::A) 
>> >>> > else 
>> >>> > retval += evaluate(features[i]::B) 
>> >>> > end 
>> >>> > end 
>> >>> > retval 
>> >>> > end 
>> >>> > 
>> >>> > using ProfileView 
>> >>> > 
>> >>> > features = Feature[] 
>> >>> > for i in 1 : 1 
>> >>> > push!(features, A()) 
>> >>> > end 
>> >>> > 
>> >>> > slow(features) 
>> >>> > @time slow(features) 
>> >>> > fast(features) 
>> 

[julia-users] Re: programatically unquote something

2016-04-01 Thread 'Greg Plowman' via julia-users
I'm not an expert, nor do I understand what you're trying to do.


Presumably you've read: 
http://docs.julialang.org/en/release-0.4/manual/metaprogramming/


I'm also somewhat confused by various ways to create expressions, and their 
relationship to each other:

:(code)

quote
   code
end

Expr(:quote, :(code))


Note that dump() is sometimes useful.

e1 = :(a+1)
e2 = quote a+1 end
e3 = Expr(:quote, :(a+1))

dump(e1)
dump(e2)
dump(e3)

e2.args[2] == e1
e3.args[1] == e1


$ is interpolation within an expression, so that :($expr) == expr
:($e1) == e1 == :(a+1)


Finally, if you quote a symbol you'll get a QuoteNode:

julia> z = :(:a)
:(:a)

julia> dump(z)
QuoteNode
  value: Symbol a

julia> z.value
:a


If you quote an expression, you'll get a Quote expression:

z = :(:(a+1))
dump(z)
z == e3
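Pulling that together, a small helper along these lines seems to answer the
question below (my naming, not a Base function):

unquote(x::QuoteNode) = x.value
unquote(ex::Expr)     = ex.head == :quote ? ex.args[1] : error("not a quoted expression")

unquote(:(:x))        # => :x
unquote(:(:(a + 1)))  # => :(a + 1)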



On Saturday, April 2, 2016 at 11:52:42 AM UTC+11, vis...@stanford.edu wrote:

> > x = :(:x)
> > $x
>
> *ERROR: unsupported or misplaced expression $*
>
>
> Is there a quote/unquote function that mimics the behavior or what happens 
> when you wrap something with :() or do $x? I want to retrieve what's inside 
> the variable x's expression.
>
> It's not sufficient to just wrap in Expr(:quote, ...) since that's not the 
> same as :() and similarly I can't find any documentation on how to unquote 
> something.
>
> There must be some routine being called in order to do the 
> quoting/unquoting - is there a way to access it?
>
>
> Vishesh
>


[julia-users] Re: Nullable{Date}

2016-02-08 Thread 'Greg Plowman' via julia-users
If only Nullables can be null, could we formally define this?

isnull(x::Nullable) = x.isnull # already defined in nullable.jl
isnull(x) = false  # extra definition for everything else
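A quick check against the example in the question below (hedged sketch; this
assumes the fallback is added as a method of Base.isnull, e.g. after
import Base: isnull):

lp15 = Nullable{Date}()
lp16 = Date(2016, 2, 8)

isnull(lp15)   # true
isnull(lp16)   # false, via the new fallback instead of a MethodError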



> isnull( lp15 ) --> true
> isnull( lp16 ) -->  MethodError: `isnull` has no method matching isnull( 
> ::Date )
>



[julia-users] Re: Executing anonymous function on worker fails when wrapped in a module

2016-01-20 Thread 'Greg Plowman' via julia-users
OK, it seems a workaround may be to use something like eval(Base.Main, 
:(()->CPU_CORES)) in place of the standard anonymous function.

Below, Lemon.getCores(pid) doesn't work, Apple.getCores(pid) does work.

Is there a better way? This workaround seems awkward and overly complicated.

module Lemon
export getCores
getCores(pid) = remotecall_fetch(pid, ()->CPU_CORES)
end

module Apple
export getCores
getCores(pid) = remotecall_fetch(pid, eval(Base.Main, :(()->CPU_CORES)))
end


Still not sure whether using anonymous function,()->CPU_CORES, is a good 
way to return a global variable from a worker?

I also noted that eval'ing in Main, Base and Base.Main all seemed to work.
I'm not really sure about the differences between these modules. Can 
someone explain?
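In the meantime, here's another untested idea that side-steps the serialization
problem entirely: getfield is a builtin that already exists on every worker, so
nothing module-local needs to be sent (0.4 argument order):

module Cherry
export getCores3
getCores3(pid) = remotecall_fetch(pid, getfield, Base, :CPU_CORES)
end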

-- Greg


[julia-users] Executing anonymous function on worker fails when wrapped in a module

2016-01-18 Thread 'Greg Plowman' via julia-users
I'm trying to execute an anonymous function on a worker from within a 
module:

getCores(pid) = remotecall_fetch(pid, ()->CPU_CORES)

module Banana
export getCores2
getCores2(pid) = remotecall_fetch(pid, ()->CPU_CORES)
end



Firstly, is using anonymous function,()->CPU_CORES, as above a good way to 
return a global variable from a worker?

I can execute getCores successfully, but getCores2 fails:
Is it possible to "escape" the anonymous function in some way?


julia> getCores(2)
4

julia> using Banana

julia> getCores2(2)
WARNING: Module Banana not defined on process 2
fatal error on 2: ERROR: UndefVarError: Banana not defined
 in deserialize at serialize.jl:504
 in handle_deserialize at serialize.jl:465
 in deserialize at serialize.jl:696
 in deserialize_datatype at serialize.jl:651
 in handle_deserialize at serialize.jl:465
 in deserialize_expr at serialize.jl:627
 in handle_deserialize at serialize.jl:458
 in deserialize_expr at serialize.jl:627
 in handle_deserialize at serialize.jl:458
 in deserialize_expr at serialize.jl:627
 in handle_deserialize at serialize.jl:458
 in deserialize at serialize.jl:556
 in handle_deserialize at serialize.jl:465
 in deserialize at serialize.jl:538
 in handle_deserialize at serialize.jl:465
 in deserialize at serialize.jl:696
 in deserialize_datatype at serialize.jl:651
 in handle_deserialize at serialize.jl:465
 in message_handler_loop at multi.jl:863
 in process_tcp_streams at multi.jl:852
 in anonymous at task.jl:63
Worker 2 terminated.ERROR: ProcessExitedException()
 in yieldto at task.jl:71
 in wait at task.jl:371
 in wait at task.jl:286
 in wait at channels.jl:93
 in take! at channels.jl:82
 in take! at multi.jl:804
 in remotecall_fetch at multi.jl:730
 in getCores2 at none:3

ERROR (unhandled task failure): EOFError: read end of file
Julia>




[julia-users] Re: good approach to run julia scripts in parallel in a PC

2016-01-13 Thread 'Greg Plowman' via julia-users

You might be able to use pmap: 
http://docs.julialang.org/en/release-0.4/manual/parallel-computing/#parallel-map-and-loops
http://docs.julialang.org/en/release-0.4/stdlib/parallel/?highlight=pmap#Base.pmap

Perhaps something like:

@everywhere function test1(p)
println("doing stuff with $p")
return 2*p
end

function main()
parameters = collect(0.001:0.001:0.6)
results = pmap(test1, parameters)
end

main()
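One thing the snippet assumes (my addition, not in the original reply) is that
worker processes already exist; they need to be added before the @everywhere
definition, for example:

addprocs(4)    # or start Julia with `julia -p 4`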


On Wednesday, January 13, 2016 at 1:32:08 AM UTC+11, Charles Santana wrote:
>
> Hi julians,
>
> Parallel computing is not my focus, but I would like to run a  julia 
> script in parallel in my PC in order to save time. The PC has an 8-core 
> processor so I would like to run at least 4 replicates of my script 
> (test1.jl) at the same time, each one with one different parameter. 
>
> So far I was working on a cluster and I was using qsub in order to submit 
> my jobs. I didn't need to configure anything. I had a quota of 20 jobs, so 
> if I launched 200 jobs 20 of them would run and the others would be in a 
> queue waiting for their moment to run. 
>
> Now I would like to run my scripts in my machine and I would like to do 
> something similar. For me it is fine if I wait for the current jobs to 
> finish running in order to launch another bunch of jobs. As well as it is 
> fine if I launch a new job for each experiment that finishes. The easiest 
> option the best :)
>
> My script does not receive any file as input, but it writes the outputs to 
> unique files (the file names are unique for each job). 
>
> Do you recommend a good way to do it in Julia? Let's say my script is 
> called "test1.jl". So far I was trying the following code to call my jobs:
>
> function main() 
> parameters = collect(0.001:0.001:0.6);
> @parallel for(p in parameters)
> command1 = `/home/cdesantana/Downloads/julia -p 4 
> /home/cdesantana/Experiments/test1.jl $p`;
> run(command1);
> end 
> end
>
> main();
>
> I really don't understand what is happening here, but I am sure it is not 
> working 4 jobs of my scripts... :(
>
> cdesantana@c-de-santana:~/Data/Dendritics/poster2016$ ps aux|grep julia
> cdesant+  1964  3.7  1.3 9214696 107356 pts/10 Sl   15:20   0:01 
> /home/cdesantana/Downloads/julia/julia script.jl
> cdesant+  1994 92.2  2.7 9489844 214388 pts/10 Rl   15:20   0:31 
> /home/cdesantana/Downloads/julia/julia -p4 
> /home/cdesantana/Experiments/test1.jl 1 0.001
> cdesant+  2013  8.7  1.5 9189424 120340 ?  Ssl  15:20   0:02 
> /home/cdesantana/Downloads/julia/usr/bin/julia -Cnative 
> -J/home/cdesantana/Downloads/julia/usr/lib/julia/sys.so --bind-to 
> 192.168.89.174 --worker
> cdesant+  2016  9.2  1.6 9288520 131600 ?  Ssl  15:20   0:03 
> /home/cdesantana/Downloads/julia/usr/bin/julia -Cnative 
> -J/home/cdesantana/Downloads/julia/usr/lib/julia/sys.so --bind-to 
> 192.168.89.174 --worker
> cdesant+  2018  6.7  1.6 9353972 131772 ?  Ssl  15:20   0:02 
> /home/cdesantana/Downloads/julia/usr/bin/julia -Cnative 
> -J/home/cdesantana/Downloads/julia/usr/lib/julia/sys.so --bind-to 
> 192.168.89.174 --worker
> cdesant+  2023  7.7  1.6 9419496 131604 ?  Ssl  15:20   0:02 
> /home/cdesantana/Downloads/julia/usr/bin/julia -Cnative 
> -J/home/cdesantana/Downloads/julia/usr/lib/julia/sys.so --bind-to 
> 192.168.89.174 --worker
> cdesant+  2159  0.0  0.0  11716   892 pts/10   S+   15:21   0:00 grep 
> --color=auto julia
>
> I was expecting that it would be running my script 4 times, each one with 
> one different value of $p (0.001, 0.002, 0.003, 0.004). 
>
> Any suggestion? My machine has ubuntu installed so I am fine if I need to 
> combine Linux programs with Julia.
>
> Many thanks in advance for any help! 
>
> Best,
>
> Charles
> -- 
> Um axé! :)
>
> --
> Charles Novaes de Santana, PhD
> https://github.com/cndesantana
>


Re: [julia-users] excessive memory allocation with @printf at compile time

2016-01-08 Thread 'Greg Plowman' via julia-users
This is amazing!
It also speeds up my compilation time by more than x10.
I have a lot of @printf statements and this has changed my life :)

What would be wrong with a macro something like:
macro fastprintf(args...)
:( f()=@printf $(args...); f() )
end

I did a comparison of 50 simple @printf statements:
@printf:     11.563024 seconds (6.29 M allocations: 274.179 MB, 2.47% gc time)
@fastprintf:  0.709322 seconds (300.52 k allocations: 14.809 MB, 0.85% gc time)


Why not update @printf macro to use this method directly?
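In the meantime, the hand-written form of the same pattern is easy enough
(illustrative format string and values only):

report(i, x) = @printf("%6d %12.6f\n", i, x)
report(1, sqrt(2))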



[julia-users] Re: Why does this code never return?

2016-01-02 Thread 'Greg Plowman' via julia-users
This seemed a little non-obvious to me as well.

I guess the take-away is that "loading" a module (via any means, not just 
reload??) loads the entire *file* containing the module, not just the stuff 
between module Foo end.
Only the stuff between module Foo end is scoped to the module, but the 
entire file is loaded.

Is this correct?
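If that's right, the fix would be to keep only the module definition in the
file and do the include/reload from the REPL instead (untested sketch):

# Foo.jl -- module definition only, no reload()/include() at file scope
module Foo

bar(x, y) = x + y

end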


On Sunday, January 3, 2016 at 3:50:58 AM UTC+11, Alexander Ranaldi wrote:

> Hello,
>
> Consider the following code which is in the file Foo.jl
>
> module Foo
>
>
> function bar(x, y)
> x + y
> end
>
>
> end
>
> reload("Foo")
>
>
> Then, at the REPL:
>
> julia> include("Foo.jl")
>
> Julia does not return. Can someone help me understand what is happening 
> here? Is the module being infinitely reloaded?
>
> Thank you
>


Re: [julia-users] What is @d?

2015-12-23 Thread 'Greg Plowman' via julia-users
Hi Eric,

I too am a long suffering Windows user.

I find the Atom editor to be useful here:
From the menu bar, Find->Find in Project
I typed  "macro d("

1 result found in 1 file for macro d(
v0.4\Lazy\src\macros.jl
241 macro d(xs...)

And here's the macro:

macro d(xs...)
  @cond if VERSION < v"0.4-"
Expr(:typed_dict, :(Any=>Any), map(esc, xs)...)
  else
:(Dict{Any, Any}($(map(x->esc(prockey(x)), xs)...)))
  end
end


-- Greg



[julia-users] Re: python a[range(x), y] equivalent in Julia

2015-12-19 Thread 'Greg Plowman' via julia-users
I'm guessing you want y to specify a column from each row.
Not sure how to do this directly. Closest I can think of this:

a = [ -1 2; 3 -4 ] 
y = [ 1, 2 ]
i = sub2ind(size(a), 1:2, y)
a[i]
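For the general case (one column index per row), a comprehension does the same
thing (my sketch):

pick(a, y) = [ a[i, y[i]] for i in 1:size(a, 1) ]

pick(a, y)    # => [-1, -4], same as a[i] above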



Re: [julia-users] Re: Using macros to override lots of operators for a user-defined type

2015-12-16 Thread 'Greg Plowman' via julia-users

On Thursday, December 17, 2015 at 12:12:06 AM UTC+11, Jeffrey Sarnoff wrote:
>
> Useful stuff, Greg.  I would like to see the way you implemented handing 
> copy constructors, unary operators, etc.
> Would you mind collecting them in a gist or posting a link to the file[s]?
>

Jeffrey,
Here's a link: https://gist.github.com/GregPlowman/bfa494e99aaed2fe488c
Not much to it really. For me, one of the more useful is the 
`@CallDefaultConstructor` macro. See the Example.jl

-- Greg




Re: [julia-users] Re: Using macros to override lots of operators for a user-defined type

2015-12-16 Thread 'Greg Plowman' via julia-users
I have exactly the same requirement. 
Additionally, I often have more than 2 fields and also change the fields of 
my custom types.

So I use a slightly more general version of the above:


function CompositeBinaryOp(T::Symbol, op::Symbol)
expressions = [ :($op(x1.$field, x2.$field)) for field in 
fieldnames(eval(T)) ]
body = Expr(:call, T, expressions...)
quote
function $op(x1::$T, x2::$T)
return $body
end
end
end

type M
a
b
end

import Base: +, -, *, /, ^

for T in [:M]
for op in [:+, :-, :*, :/, :^]
#eval(CompositeBinaryOp(T, op))

code = CompositeBinaryOp(T, op)
println(code, "\n")
eval(code)
end
end

 
The advantage here for me is that I can change the fields (number, type 
order etc.) and operators don't need manual updating.

I have similar functions for copy constructors, unary operators etc.
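For illustration, a quick check of what the generated methods do (values are my
own, not from the original post):

m1 = M(1, 2.0)
m2 = M(3, 4.0)

m1 + m2    # => M(4, 6.0)  -- fieldwise +
m1 * m2    # => M(3, 8.0)  -- fieldwise *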


[julia-users] Re: range bug in 0.4.1?

2015-12-06 Thread 'Greg Plowman' via julia-users
What about using integer division with div(), and colon operator to 
construct range?

julia> N = 2^3-1
7

julia> imid = div(N+1,2)
4

julia> imid-2 : imid+2
2:6



[julia-users] Re: pmap - intermingled output from workers on v0.4

2015-11-26 Thread 'Greg Plowman' via julia-users
OK, I've done a little more digging.

It seems that in v0.4, remote workers are started differently. This is my 
understanding:
Only one worker for each host is started directly from the master process.
Additional workers on each host are started from the first worker on that 
host.
Thus output from these additional workers is routed via the first worker on 
the host (rather than directly to master process).
Somehow this causes the intermingled output.

To overcome this, I can start all workers directly from the master process, 
and output is orderly again (as for v0.3).
Presumably, the new v0.4 indirect method was to speed up adding remote 
workers.

Clearly, I don't really understand much of this. And I'm not sure how 
connecting all workers directly to master process affects performance or 
scalability.
Intuitively, it doesn't sound good, but for my purpose it does give more 
readable output.

To help speed up the startup of workers, I can start workers on different 
hosts in parallel (but each worker on host is started serially and directly 
from master process)

@sync begin
for (host, nworkers) in machines
@async begin
for i = 1:nworkers
addprocs([(host,1)])
end
end
end
end



[julia-users] Re: pmap - intermingled output from workers on v0.4

2015-11-25 Thread 'Greg Plowman' via julia-users

Thanks for your reply.

In my view it is natural, that the order of the "output" (print statements) 
> is intermingled, as the code runs in parallel.


Yes, I agree. But I'd like to make sure we're talking about the same level 
of intermingledness (is this a new word?)
Firstly I don't really understand parallel processing, output streams, 
switching etc.
But when I first started using Julia for parallel sims (Julia v0.3) I was 
initially surprised that output from each worker was NOT intermingled, in 
the sense that each print statement from a worker was delivered to the 
master process console "atomically", i.e. there were discrete lines on the 
console, each wholly from a single worker.
Sure, the order of the lines depended on the speed of the processor, the 
amount of work to do etc.
After a while, I just assumed this was either magic, or there was some kind 
of queuing system with locking or similar.
In any case, I didn't really think about it until I started using Julia 
v0.4 where output lines are sometimes not discrete and sometimes delayed.

Here's an example of output:
 
 ...
 From worker 3:  Completed random trial 69
 From worker 3:  Starting random trial 86 with 100 games
 From worker 5:  Starting random trial 87 with 100 games
 From worker 2:  Completed random trial 70
 From worker 2:  Starting random trial 88 with 100 games
 From worker 27: Starting random trial 89 with 100 games
 From worker 21: Completed random trial  From worker 22: Starting 
random trial 90 with 100 games
 From worker 23: Starting random trial 93 with 100 games
 From worker 21: 81
 From worker 19: Starting random trial 91 with 100 games
 From worker 14: Starting random trial 96 with 100 games
 From worker 4:  Completed random trial 82
 From worker 4:  Starting random trial 98 with 100 games
 From worker 24: Completed random trial  From worker 26: Completed 
random trial 76
 From worker 25: Completed random trial 80
 From worker 24: 85
 From worker 22: Completed random trial 90
 From worker 3:  Completed random trial 86
 From worker 8:  Completed random trial  From worker 9:  Starting 
random trial 94 with 100 games
 From worker 8:  78
 From worker 3:  Starting random trial 99 with 100 games
 From worker 27: Completed random trial  From worker 29: Starting 
random trial 92 with 100 games
 From worker 28: Starting random trial 95 with 100 games
 From worker 27: 89
 From worker 2:  Completed random trial 88
 From worker 2:  Starting random trial 100 with 100 games
 From worker 23: Completed random trial 93
 From worker 29: Completed random trial 92
 From worker 28: Completed random trial 95
 From worker 14: Completed random trial  From worker 16: Completed 
random trial 72
 From worker 15: Completed random trial 75
 From worker 20: Completed random trial 79
 From worker 17: Completed random trial 83
 From worker 18: Completed random trial 84
 From worker 19: Completed random trial 91
 From worker 14: 96
 From worker 4:  Completed random trial 98
 From worker 9:  Completed random trial 94
 From worker 3:  Completed random trial 99
 From worker 10: Completed random trial  From worker 11: Completed 
random trial 65
 From worker 12: Completed random trial 66
 From worker 13: Completed random trial 71
 From worker 10: 77  From worker 11: Starting random trial 97 with 
100 games
 From worker 10:
 From worker 2:  Completed random trial 100
 From worker 5:  Completed random trial  From worker 6:  Completed 
random trial 73
 From worker 7:  Completed random trial 74
 From worker 5:  87
 From worker 11: Completed random trial 97


Again I have no idea how these thing work, but here's code from Julia v0.3 
(multi.jl) 

 if isa(stream, AsyncStream)
let wrker = w
# redirect console output from workers to the client's stdout:
@async begin
while !eof(stream)
line = readline(stream)
print("\tFrom worker $(wrker.id):\t$line")
end
end
end
end


And equivalent code from Julia v0.4:

function redirect_worker_output(ident, stream)
@schedule while !eof(stream)
line = readline(stream)
if startswith(line, "\tFrom worker ")
# STDOUT's of "additional" workers started from an initial 
worker on a host are not available
# on the master directly - they are routed via the initial 
worker's STDOUT.
print(line)
else
print("\tFrom worker $(ident):\t$line")
end
end
end


It seems we've gone from @async to @schedule.
Would this make a difference?



[julia-users] pmap - intermingled output from workers on v0.4

2015-11-23 Thread 'Greg Plowman' via julia-users
Has output from parallel workers changed in Julia v0.4 from v0.3?

I guess that running parallel processes might lead to intermingled output.
However, I have (more or less) the same parallel simulation code using pmap 
running on v0.3 and v0.4.

On v0.3 the output from workers is always orderly.

On v0.4 it's often intermingled between workers.
But moreover, the output sometimes seems delayed, as if it's being buffered 
and not being flushed straight away.

Is there a way I can get the output from workers written immediately?



[julia-users] Re: pmap - intermingled output from workers on v0.4

2015-11-23 Thread 'Greg Plowman' via julia-users
I should add this problem is only when using *remote* workers. (In my case 
ssh on Windows).

The following code produces intermingled output with multiple workers on 
multiple machines (Julia v0.4)
Output is orderly when using Julia v0.3, or with v0.4 when workers are on 
local machine only.


function Launch()
@everywhere function sim(trial, numIterations)
println("Starting trial $trial")
s = 0.0
for i = 1:numIterations
s += sum(sqrt(rand(10^6)))
end
println("Finished trial $trial")
s
end

numTrials = 100
numIterations = 100
println("Running random simulation: $numTrials trials of $numIterations 
iterations ... ")
results = pmap(sim, 1:numTrials, fill(numIterations, numTrials))
end 




[julia-users] Re: General help with concepts: splatting/slurping, inlining, tuple access, function chaining

2015-11-11 Thread 'Greg Plowman' via julia-users
Thank you so much Stefan and Matt.
This really helps me a lot!
I really appreciate your time.
-- Greg


[julia-users] General help with concepts: splatting/slurping, inlining, tuple access, function chaining

2015-11-10 Thread 'Greg Plowman' via julia-users
I have some naïve and probably stupid questions about writing efficient 
code in Julia.
I almost didn't post this because I'm not sure if this is the right place 
for asking for this type of conceptual help. I do hope it is appropriate. I 
apologise in advance if it's not.

I have been trying to benchmark some code (using @time) but I'm coming up 
against several concepts I would like to understand better, and will no 
doubt help to better guide my hitherto "black box" approach.
These concepts are related in what I'm trying to understand, but 
understanding them separately might also help me.

   - splatting/slurping
   - inlining
   - tuple access
   - function chaining

Not really sure how to present my questions. 
Maybe first read through to *Separate Arguments vs Tuple* where all 
concepts seem to come into play and there are more direct questions.


*Splatting/Slurping*
I have read that splatting/slurping incurs a penalty. 
Is it the splatting or the slurping or both?
I have also read that this is not the case if the function is inlined. Is 
this true?

*Inlining*
How do I know if a function is inlined? 
When is it necessary to explicitly inline? (say with @inline, or 
Exrp(:meta, :inline)) 
Does this guarantee inlining, or is it just a hint to the compiler?
Is the compiler usually smart enough to inline optimally.
Why wouldn't I explicitly inline all my small functions?

*Overhead of accessing tuples compared to separate variables/arguments*
Is there overhead in accessing tuples vs separate arguments?
Is the expression assigned to x slower than expression assigned to y?
I = (1,2,3)
i1, i2, i3 = 1, 2, 3
x = I[1]+I[2]+I[3]
y = i1+i2+i3


*"Chained" function calls *(not sure if chained is the right word)
It seems sometimes I have lots of small "convenience" functions, 
effectively chained until a final "working" function is called.
A calls B calls C calls D calls ... calls Z. If A, B, C, ... are small 
(maybe getters, convenience functions etc), is this still efficient?
Is there any penalty for this?
How does this relate to inlining?
Presumably there is no penalty if and only if functions are inlined? Is 
this true?
If so, is there a limit to the number of levels (functions in call chain) 
that can be inlined?
Should I be worried about any of this?


So I guess all this comes together when trying to understand the following:

*Separate Arguments vs Tuple*
I have seen code where there are two forms of a function, one accepts a 
tuple, the other accepts separate arguments.

foo(I::Tuple{Vararg{Integer}}) = (some code here)  -OR-  foo{N}(I::NTuple{N,Integer}) = (some code here)
foo(I::Integer...) = foo(I)

Is calling foo((1,2,3)) more efficient than calling foo(1,2,3)?
It seems foo(1,2,3) requires a slurp and then calls foo((1,2,3)) anyway. 
Is there overhead in the slurp?
Is there overhead in the extra call, or will inlining remove the overhead?
If so, how do I know if it will be inlined automatically? Do I need to 
explicitly @inline or equivalent? 

Does defining foo as a function of a tuple incur tuple access overhead? 
Would it be faster to define a "primary" function with separate arguments 
and a convenience function with tuple argument?

bar(i1::Integer, i2::Integer, i3::Integer) = (some code here)
bar(I::NTuple{3,Integer}) = bar(I...)

Is bar(i1,i2,i3) faster than foo(I) (simply because foo requires tuple 
access I[1], I[2],I[3])
Does calling bar(I) incur splatting overhead?
Is there overhead in the extra call, or will inlining remove the overhead?

- Greg



[julia-users] Re: Windows: Add packages for all users

2015-11-07 Thread 'Greg Plowman' via julia-users


Perhaps set environment variable JULIA_PKGDIR for all users.

 
http://docs.julialang.org/en/release-0.4/stdlib/pkg/#Base.Pkg.dir



Re: [julia-users] Re: Order of multiple for-loops

2015-10-29 Thread 'Greg Plowman' via julia-users
I wouldn't say that storage order was irrelevant, but rather that 
mathematics order is different to Julia's storage order.
If Julia had row-major storage, I suspect order of comprehension loops 
would/should be the same as normal for-loops.
(Presumably order of comprehension loops now are so that they are 
constructed column-major order)

In fact it took me a while to write efficient loops for Julia's 
column-major ordering:

for j = 1:3, i = 1:2
  stuff with A[i,j]
end

At first, this seemed "awkward" but I'm used to it now.

Conversely, comprehension loops are written how I "think", but order is 
reversed for column-major efficiency.
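A small self-contained check of the effect (my example, not from the earlier posts):

A = zeros(2000, 2000)

function colmajor!(A)    # inner loop walks down columns: contiguous memory
    for j = 1:size(A, 2), i = 1:size(A, 1)
        A[i, j] += 1.0
    end
end

function rowmajor!(A)    # inner loop walks across rows: strided access
    for i = 1:size(A, 1), j = 1:size(A, 2)
        A[i, j] += 1.0
    end
end

colmajor!(A); rowmajor!(A)    # warm up / compile first
@time colmajor!(A)            # typically noticeably faster than the next line
@time rowmajor!(A)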



[julia-users] Performance of functions when called indirectly via parameter passing

2015-10-28 Thread 'Greg Plowman' via julia-users
I think I have read that passing around functions in not efficient.
Or maybe this is just anonymous functions?
In any case I want to run some comparison performance tests on many 
functions, so have written a general function to perform tests on functions 
passed in as arguments. See below.

Q1. Is this a valid strategy? Will the passed-in functions run the same as 
when executed with an explicit call?
Q2. In the code below, is there any benefit/downside to running gc() before 
each timing? 
Q3. I understand the first 3 return values from `@timed`. What are the 
other 2 return values? Or where can I read about these?

Are there any other considerations I should be aware of?


function PerformanceTest(f::Function, x::MyType1, y::MyType2, n::Int)
# precompile
@timed f(x, 1)
@timed f(y, 1)

gc() # does this help? or hinder?
(xResult, xElapsed, xAllocated, xGC, xOther) = @timed f(x, n)
gc()
(yResult, yElapsed, yAllocated, yGC, yOther) = @timed f(y, n)


# make sure we get same result
@assert xResult == yResult

@printf("%-30s %15i %15i %8s %12f %12f %8.2f %12i %12i %8.2f\n",
f, xResult, yResult, xResult==yResult, xElapsed, yElapsed, yElapsed/
xElapsed, xAllocated, yAllocated, yAllocated/xAllocated)

yResult
end
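On Q3, my understanding (worth checking against the Base source for your version) is that @timed returns a 5-tuple of value, elapsed seconds, bytes allocated, GC seconds, and an object of GC/allocation counters:

val, elapsed, bytes, gctime, gcstats = @timed sum(rand(10^6))
# val     -> the call's return value
# elapsed -> wall-clock seconds
# bytes   -> bytes allocated during the call
# gctime  -> seconds spent in garbage collection
# gcstats -> counters with the remaining GC detail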




[julia-users] Re: Is there a tutorial on how to set up my own Julia cluster?

2015-10-28 Thread 'Greg Plowman' via julia-users
On v0.3 try multiple entries (lines) in machine file, one for each worker.
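For example (hostnames are hypothetical), one line per worker, repeating a host to get several workers on it. Contents of a file named "machines":

greg@node1
greg@node1
greg@node2
greg@node2

then start the master with:

julia --machinefile machines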

[julia-users] Re: Performance of functions when called indirectly via parameter passing

2015-10-28 Thread 'Greg Plowman' via julia-users
Thanks Kristoffer.

But for Q2, I'm not sure I want to exclude gc time.
Rather I want gc time to be correctly allocated.
I'm not really sure how gc works, but I thought calling gc() before 
each function call would ensure there are no outstanding gc's that might 
happen during the call.
In other words, if I do:
@timed f1()
@timed f2()

I want to avoid allocations in f1 being gc'd and timed during execution of 
f2.
Maybe I'm just being ridiculously paranoid?





[julia-users] Re: REPL: show( io::IO, x::Array{my_object,1} ) = ... overwrite

2015-09-30 Thread 'Greg Plowman' via julia-users
I think I have a similar question.
I have defined a type (it happens to be a subtype of AbstractArray).
Also I have defined Base.show(io::IO, A::MyArray), so that show(A) works as 
I want.
But the REPL doesn't seem to use show, i.e. typing A at the REPL produces the 
default output.
How can I get the REPL to use show? Or is there some other function I need 
to extend for my type?
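(One guess I haven't verified: the REPL may go through writemime with the text/plain MIME type rather than calling show directly, so something like the following might be the missing piece:

Base.writemime(io::IO, ::MIME"text/plain", A::MyArray) = show(io, A)
)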
-- Greg



[julia-users] Re: Julia equivalent to a static member function in C++/MATLAB

2015-09-25 Thread 'Greg Plowman' via julia-users
 

> In Julia, you can do the same thing, it is just spelled differently.  You 
> could do  

   x = zeros(MyMatrix, 3, 4)

where you have defined

 Base.zeros(::Type{MyMatrix}, m, n) = .


Of course in Julia you can do anything you want.
However, is it recommended to redefine the meaning of generic functions?
I would assume zeros(MyMatrix,3,4) creates a 3x4 Array{MyMatrix,2}



Re: [julia-users] Juno stopped working - error message

2015-09-23 Thread 'Greg Plowman' via julia-users
Hi,

I'm aware that there are problems with Juno at the moment (I've had my 
share of problems).
I'm not really expecting a resolution to the following issue, but 
rather reporting it in case it is useful or important.
The message says to submit a bug report and seems to be coming from Julia 
core.
This message has been appearing sporadically after the initial error 
message "symbol could not be found jl_generating_output (-1): no error", 
which occurs immediately after Juno startup while/after connecting to 
Julia. 

-- Greg

symbol could not be found jl_generating_output (-1): no error

Please submit a bug report with steps to reproduce this fault, and any 
error messages that follow (in their entirety). Thanks.
Exception: EXCEPTION_ACCESS_VIOLATION at 0x6bf5d93d -- jl_static_show at C:\
Program Files\Julia-0.3.11\bin\libjulia.dll (unknown line)
jl_static_show at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
jl_static_show at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
jl_static_show at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
jl_static_show at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
gdbbacktrace at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown line
)
jl_throw at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown line)
jl_type_error_rt at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
jl_apply_type_ at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
jl_apply_type at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
jl_get_backtrace at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
_start at client.jl:403
jlcall__start_364 at  (unknown line)
jl_apply_generic at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
unknown function (ip: 4200686)
julia_trampoline at C:\Program Files\Julia-0.3.11\bin\libjulia.dll (unknown 
line)
unknown function (ip: 4202914)
unknown function (ip: 4199436)
unknown function (ip: 4199739)
BaseThreadInitThunk at C:\Windows\system32\KERNEL32.DLL (unknown line)
RtlUserThreadStart at C:\Windows\SYSTEM32\ntdll.dll (unknown line)
RtlUserThreadStart at C:\Windows\SYSTEM32\ntdll.dll (unknown line)





[julia-users] Re: When does colon indexing get evaluated / converted?

2015-09-22 Thread 'Greg Plowman' via julia-users
Matt,
Thank you so much for your great reply.
And yes it does make sense now after such a helpful and well-targeted 
explanation.
You explained "colon-lowering" in an "explanation-lowered" way that a 
non-expert can understand.
This is refreshingly thoughtful and helpful.
Thanks again.
-- Greg

 
On Tuesday, September 22, 2015 at 11:09:41 PM UTC+10, Matt Bauman wrote:

> A colon by itself is simply a synonym for `Colon()`, both on 0.3 and 0.4:
>  
> julia> :
> Colon()
>  
> You can use colons in normal function calls and dispatch in both 0.3 and 
> 0.4:
>  
> julia> f(x::Colon) = x
>f(:)
> Colon()
>  
> That's what's happening with that sub definition.  `sub` acts like 
> indexing, but it's just a normal function call.  And so the Colon() passes 
> through unchanged in both 0.3 and 0.4.
>  
> Indexing expressions are different.  There are some special lowering steps 
> to handle colon (just in 0.3) and end (in both 0.3 and 0.4).  "Lowering" is 
> an intermediate step between parsing and compilation, and you can see what 
> happens to an expression during lowering by calling `expand`.  The expanded 
> expressions have some funny things in them (`top(endof)(A)` is a special 
> internal pseudo-syntax that is kinda-sorta like saying `Base.endof(A)`), 
> but you can see how the `end` keyword lowers to getindex calls in both 0.3 
> and 0.4:
>  
> julia> expand(:(A[end]))
> :(getindex(A,(top(endof))(A)))
>  
> julia> expand(:(A[1, end]))
> :(getindex(A,1,(top(trailingsize))(A,2)))
>  
> julia> expand(:(A[1, end, 3]))
> :(getindex(A,1,(top(size))(A,2),3))
>  
> So, now to the difference between 0.3 and 0.4: the special lowering step 
> for colons in indexing expressions was removed.
>  
> # Julia 0.3:
> julia> expand(:(A[:]))
> :(getindex(A,colon(1,(top(endof))(A
>  
> # Julia 0.4:
> julia> expand(:(A[:]))
> :(getindex(A,:))
>  
> You can see that in 0.3, it effectively expanded to `1:end`.  But 0.4 just 
> left the `:` symbol as it was, and it just gets evaluated normally as an 
> argument to getindex.  It's no longer special. So in 0.4, you can 
> specialize dispatch on Colon indices.
>  
> Make sense?
>  
> On Tuesday, September 22, 2015 at 1:07:01 AM UTC-4, Greg Plowman wrote:
>>
>> Hi All,
>> Thanks all for your replies.
>>  
>> OK I can see this will be much easier in v0.4.
>> I will revisit when v0.4 released.
>>  
>> I'm still curious about colon and end
>>
>>> Colon lowering changed in 0.4, 
>>
>>  
>> Matt, could you expand on this? How/when is this done in v0.3 vs v0.4?
>>  
>> Does this mean v0.3 code attempting to dispatch on Colon type cannot 
>> work? (for example the code from subarray.jl quoted below) 
>>  
>> sub(A::AbstractArray, I::Union(RangeIndex, Colon)...) = sub(A, ntuple(
>> length(I), i-> isa(I[i], Colon) ? (1:size(A,i)) : I[i])...)
>>  
>> I noticed that OffsetArrays (https://github.com/alsam/OffsetArrays.jl) 
>> defines const (..) = Colon(), presumably to use .. in place of :
>>  
>> Will it work in v0.4?
>> How and when is end converted?
>>  
>> Thanks again,
>> Greg
>>  
>>  
>>  
>>
>

[julia-users] Re: When does colon indexing get evaluated / converted?

2015-09-21 Thread 'Greg Plowman' via julia-users
Hi All,
Thanks all for your replies.

OK I can see this will be much easier in v0.4.
I will revisit when v0.4 released.

I'm still curious about colon and end

> Colon lowering changed in 0.4, 


Matt, could you expand on this? How/when is this done in v0.3 vs v0.4?

Does this mean v0.3 code attempting to dispatch on Colon type cannot work? 
(for example the code from subarray.jl quoted below) 

sub(A::AbstractArray, I::Union(RangeIndex, Colon)...) = sub(A, ntuple(length
(I), i-> isa(I[i], Colon) ? (1:size(A,i)) : I[i])...)

I noticed that OffsetArrays (https://github.com/alsam/OffsetArrays.jl) 
defines const (..) = Colon(), presumably to use .. in place of :

Will it work in v0.4?
How and when is end converted?

Thanks again,
Greg





[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi,

I tried Pkg.pin("JuliaParser", v"0.1.2") but now I get the following error 
(multiple times).

Before this JuliaParser was at version v0.6.3, are you sure we should try 
reverting to v0.1.2?


WARNING: LightTable.jl: `skipws` has no method matching skipws(::TokenStream
)
 in scopes at C:\Users\Greg\.julia\v0.3\Jewel\src\parse\scope.jl:148
 in codemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\parse/parse.jl:141
 in filemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\module.jl:93
 in anonymous at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable\misc.jl:5
 in handlecmd at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.
jl:65
 in handlenext at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.
jl:81
 in server at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:
22
 in server at C:\Users\Greg\.julia\v0.3\Jewel\src\Jewel.jl:18
 in include at boot.jl:245
 in include_from_node1 at loading.jl:128
 in process_options at client.jl:285
 in _start at client.jl:354



Any other suggestions?

--Greg


On Sunday, September 20, 2015 at 6:16:10 PM UTC+10, Michael Hatherly wrote:

> The type cannot be constructed error should be fixed on 0.3 by 
> https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean time 
> you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes the 
> problem on Julia 0.3. (Or a version earlier than v"0.1.2" if needed.)
>
> I’ve come across the cannot resize array with shared data error a while 
> ago with the Atom-based Juno. It was fixed by Pkg.checkouting all the 
> involved packages. Might be the same for the LightTable-base Juno, worth a 
> try maybe.
>
> — Mike
> On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:
>>
>> I tried to solve the problem by running Julia 0.4.0-rc2 instead of Julia 
>> 0.3.11. I manage to execute a few commands in Juno, but juno/julia is stuck 
>> as before. The error message is slightly different though:
>>
>>
>>- 
>>
>>WARNING: LightTable.jl: cannot resize array with shared data
>> in push! at array.jl:430
>> in read_operator at 
>> C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:368
>> in next_token at C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:752
>> in qualifiedname at 
>> C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:59
>> in nexttoken at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:78
>> in nextscope! at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:116
>> in scopes at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:149
>> [inlined code] from C:\Users\Serge\.julia\v0.4\Lazy\src\macros.jl:141
>> in codemodule at C:\Users\Serge\.julia\v0.4\Jewel\src\parse/parse.jl:8
>> in getmodule at C:\Users\Serge\.julia\v0.4\Jewel\src\eval.jl:42
>> in anonymous at 
>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable\eval.jl:51
>> in handlecmd at 
>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:65
>> in handlenext at 
>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:81
>> in server at 
>> C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:22
>> in server at C:\Users\Serge\.julia\v0.4\Jewel\src\Jewel.jl:18
>> in include at boot.jl:261
>> in include_from_node1 at loading.jl:304
>> in process_options at client.jl:308
>> in _start at client.jl:411
>>
>>
>>
>> On Saturday, 19 September 2015 10:40:49 UTC+1, JKPie wrote:
>>>
>>> I have the same problem, I have spent couple of hours reinstalling Julia 
>>> and Juno on Windows and Linux with no result. The code works fine, when I 
>>> call it from command line directly. 
>>> Please help it is freezing my work :/
>>> J
>>>
>>

[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi All,

On 2 different PCs where Juno works (almost without error) Pkg.status() 
reports JuliaParser v0.6.2
On PC that has Juno errors, Pkg.status() reports JuliaParser v0.6.3
Rolling back to JuliaParser v0.1.2 creates different errors.
So it seems we need to revert to JuliaParser v0.6.2

I'm not at a PC where I can see if we can pin v0.6.2, in light of the 
following:

Before this JuliaParser was at version v0.6.3, are you sure we should try 
>> reverting to v0.1.2?
>
> See the tagged versions 

https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the next 
> latest tagged version. You could probably checkout a specific commit prior 
> to the commit that’s causing the breakage instead though.

Also, I don't want to play around with Pkg.ANYTHING on a working 
configuration at the moment :)

-- Greg


On Monday, September 21, 2015 at 4:50:37 AM UTC+10, Tony Kelman wrote:

> What's temporarily broken here is some of the packages that 
> Light-Table-based Juno relies on to work. In the meantime you can still use 
> command-line REPL Julia, and while it's not the most friendly interface 
> your code will still run. Your estimation of the Julia ecosystem's 
> robustness is pretty accurate though, if you really want to ensure things 
> stay working the best way of doing that right now is keeping all packages 
> pinned until you have a chance to thoroughly test the versions that an 
> upgrade would give you. We plan on automating some of this testing going 
> forward, though in the case of Juno much of the code is being replaced 
> right now and the replacements aren't totally ready just yet.
>
>
> On Sunday, September 20, 2015 at 10:45:01 AM UTC-7, Serge Santos wrote:
>>
>> Thank you all for your inputs. It tried your suggestions and, 
>> unfortunately, it does not work. I tried Atom but, after a good start and 
>> some success, it keeps crashing in middle of a calculation (windows 10).
>>
>> To summarize what I tried with Juno and julia 0.3.11:
>> - Compat v.0.7.0 (pinned)
>> - JuliaParser V0.1.2  (pinned)
>> - Jewel v1.0.6.
>>
>> I get a first error message, which seems to indicate that Julia cannot 
>> generate an output.
>>
>> *symbol could not be found jl_generating_output (-1): The specified 
>> procedure could not be found.*
>>
>> Followed by:
>>
>> *WARNING: LightTable.jl: `skipws` has no method matching 
>> skipws(::TokenStream)*
>> * in scopes at C:\Users\Serge\.julia\v0.3\Jewel\src\parse\scope.jl:148*
>> * in codemodule at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\parse/parse.jl:141*
>> * in getmodule at C:\Users\Serge\.julia\v0.3\Jewel\src\eval.jl:42*
>> * in anonymous at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable\eval.jl:51*
>> * in handlecmd at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:65*
>> * in handlenext at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:81*
>> * in server at 
>> C:\Users\Serge\.julia\v0.3\Jewel\src\LightTable/LightTable.jl:22*
>> * in server at C:\Users\Serge\.julia\v0.3\Jewel\src\Jewel.jl:18*
>> * in include at boot.jl:245*
>> * in include_from_node1 at loading.jl:128*
>> * in process_options at client.jl:285*
>> * in _start at client.jl:354*
>>
>> Looking at dependencies with MetadataTools, I smply got:* nothing.* I 
>> assume that Jewel does not have any dependencies.
>>
>> I have a lot of understanding for the effort that goes into making the 
>> Julia project work and a success, but I just lost two days of my life 
>> trying to make things work and I have an important deadline ahead that I am 
>> likely to miss because I relied on a promising tool that, unfortunately, 
>> does not seem robust enough at this stage given the amount of development 
>> happening.It makes me wonder if I should not wait until Julia becomes more 
>> established and robust and switch to other solutions.
>>
>> On Sunday, 20 September 2015 16:47:05 UTC+1, Dongning Guo wrote:
>>>
>>> In case you're stuck, this may be a way out:
>>> I installed Atom editor and it seems Julia (v0.5??? nightly build)
>>> works with it after installing a few packages.  I'm learning to use
>>> the new environment ...
>>> See https://github.com/JunoLab/atom-julia-client/tree/master/manual
>>>
>>>
>>> On Sunday, September 20, 2015 at 6:59:02 AM UTC-5, Michael Hatherly 
>>> wrote:

 I can’t see LightTable listed in Pkg.status() output in either PC

 The LightTable module is part of the Jewel package is seems, 
 https://github.com/one-more-minute/Jewel.jl/blob/fb854b0a64047ee642773c0aa824993714ee7f56/src/Jewel.jl#L22,
  
 and so won’t show up on Pkg.status() output since it’s not a true 
 package by itself. Apologies for the misleading directions there.

 What other packages would Juno depend on?

 You can manually walk through the REQUIRE files to see what Jewel 
 depends on, or use MetadataTools to do it:

 julia> using MetadataTools
 julia> pkgmeta = get_all_pkg();
 julia> graph = 

[julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi,

I'm trying to define a custom Array type that can be indexed using 
arbitrary ranges.

e.g. A = MyArray(Int, 3:8) would define a 6-element vector with indexes 
ranging from 3 to 8, rather than the default 1 to 6.

I've made some progress, but am now stuck on how to handle colon indexing.

A[4:6] works by defining appropriate getindex and setindex!

e.g.  setindex!{T,S<:Real}(A::MyArray{T,1}, value, I::AbstractVector{S}) = 
...

but A[:] = 0 seems to get translated to A[1:6] before dispatch on setindex!, 
so I can't hijack the call.

>From subarray.jl, the code below suggests I can specialise on the Colon 
type, but this doesn't seem to work for me. Colon appears to be converted 
to UnitRange *before* calling setindex!

sub(A::AbstractArray, I::Union(RangeIndex, Colon)...) = sub(A, ntuple(length
(I), i-> isa(I[i], Colon) ? (1:size(A,i)) : I[i])...)


Is there a way around this?
Should I be able to specialise on the colon argument?

-- Greg


[julia-users] Re: Juno stopped working - error message

2015-09-20 Thread 'Greg Plowman' via julia-users
OK I see that second latest tag is v0.1.2 (17 June 2014). Seems a strange 
jump.

But now I understand pinning, I can use a strategy of rolling back 
Juno-related packages until Juno works again.

What other packages would Juno depend on?

To help me in this endeavour, I have access to another PC on which Juno 
runs (almost) without error.
Confusingly, Pkg.status() reports JuliaParser v0.6.2 on this second PC
Jewel is v1.0.6 on both PCs.
I can't see LightTable listed in Pkg.status() output in either PC

I think Compat v0.7.2 is also causing ERROR: @doc not defined issue (
https://groups.google.com/forum/#!topic/julia-users/rsM4hxdkAxg)
so maybe reverting back to Compat v0.7.0 might also help.

-- Greg


On Sunday, September 20, 2015 at 7:08:47 PM UTC+10, Michael Hatherly wrote:

> Before this JuliaParser was at version v0.6.3, are you sure we should try 
> reverting to v0.1.2?
>
> See the tagged versions 
> https://github.com/jakebolewski/JuliaParser.jl/releases. So that’s the 
> next latest tagged version. You could probably checkout a specific commit 
> prior to the commit that’s causing the breakage instead though.
>
> What version of Jewel.jl and LightTable.jl are you using?
>
> — Mike
> ​
>
> On Sunday, 20 September 2015 10:56:22 UTC+2, Greg Plowman wrote:
>>
>> Hi,
>>
>> I tried Pkg.pin("JuliaParser", v"0.1.2") but now I get the following 
>> error (multiple times).
>>
>> Before this JuliaParser was at version v0.6.3, are you sure we should try 
>> reverting to v0.1.2?
>>
>>
>> WARNING: LightTable.jl: `skipws` has no method matching skipws(::
>> TokenStream)
>>  in scopes at C:\Users\Greg\.julia\v0.3\Jewel\src\parse\scope.jl:148
>>  in codemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\parse/parse.jl:141
>>  in filemodule at C:\Users\Greg\.julia\v0.3\Jewel\src\module.jl:93
>>  in anonymous at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable\misc.jl:5
>>  in handlecmd at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
>> LightTable.jl:65
>>  in handlenext at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/
>> LightTable.jl:81
>>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\LightTable/LightTable.
>> jl:22
>>  in server at C:\Users\Greg\.julia\v0.3\Jewel\src\Jewel.jl:18
>>  in include at boot.jl:245
>>  in include_from_node1 at loading.jl:128
>>  in process_options at client.jl:285
>>  in _start at client.jl:354
>>
>>
>>
>> Any other suggestions?
>>
>> --Greg
>>
>>
>> On Sunday, September 20, 2015 at 6:16:10 PM UTC+10, Michael Hatherly 
>> wrote:
>>
>>> The type cannot be constructed error should be fixed on 0.3 by 
>>> https://github.com/jakebolewski/JuliaParser.jl/pull/25. In the mean 
>>> time you could Pkg.pin("JuliaParser", v"0.1.2") and see if that fixes 
>>> the problem on Julia 0.3. (Or a version earlier than v"0.1.2" if 
>>> needed.)
>>>
>>> I’ve come across the cannot resize array with shared data error a while 
>>> ago with the Atom-based Juno. It was fixed by Pkg.checkouting all the 
>>> involved packages. Might be the same for the LightTable-base Juno, worth a 
>>> try maybe.
>>>
>>> — Mike
>>> On Saturday, 19 September 2015 19:09:22 UTC+2, Serge Santos wrote:

 I tried to solve the problem by running Julia 0.4.0-rc2 instead of 
 Julia 0.3.11. I manage to execute a few commands in Juno, but juno/julia 
 is 
 stuck as before. The error message is slightly different though:


- 

WARNING: LightTable.jl: cannot resize array with shared data
 in push! at array.jl:430
 in read_operator at 
 C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:368
 in next_token at 
 C:\Users\Serge\.julia\v0.4\JuliaParser\src\lexer.jl:752
 in qualifiedname at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:59
 in nexttoken at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:78
 in nextscope! at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:116
 in scopes at C:\Users\Serge\.julia\v0.4\Jewel\src\parse\scope.jl:149
 [inlined code] from C:\Users\Serge\.julia\v0.4\Lazy\src\macros.jl:141
 in codemodule at C:\Users\Serge\.julia\v0.4\Jewel\src\parse/parse.jl:8
 in getmodule at C:\Users\Serge\.julia\v0.4\Jewel\src\eval.jl:42
 in anonymous at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable\eval.jl:51
 in handlecmd at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:65
 in handlenext at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:81
 in server at 
 C:\Users\Serge\.julia\v0.4\Jewel\src\LightTable/LightTable.jl:22
 in server at C:\Users\Serge\.julia\v0.4\Jewel\src\Jewel.jl:18
 in include at boot.jl:261
 in include_from_node1 at loading.jl:304
 in process_options at client.jl:308
 in _start at client.jl:411



 On Saturday, 19 September 2015 10:40:49 UTC+1, JKPie wrote:
>
> I have the same problem, I have 

Re: [julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread 'Greg Plowman' via julia-users
To further clarify, I thought I could specialise getindex / setindex! on a Colon-typed 
argument.

see below, getindex2 is being called, but getindex is not being called.
A[:] calls getindex(A::AbstractArray{T,N},I::AbstractArray{T,N}) at 
abstractarray.jl:380
presumably after [:] has been converted to [1:8]



julia> getindex(A::FArray, ::Colon) = A[start(A) : endof(A)]
getindex (generic function with 177 methods)

julia> getindex2(A::FArray, ::Colon) = A[start(A) : endof(A)]
getindex2 (generic function with 1 method)

julia> A = FZeros(Int, -2:8)
11-element FArray{Int64,1} (-2:8,)

julia> A[:]
8-element Array{Int64,1}:
 0
 0
 0
 0
 0
 0
 0
 0

julia> @which A[:]
getindex(A::AbstractArray{T,N},I::AbstractArray{T,N}) at 
abstractarray.jl:380

julia> getindex2(A, :)
11-element Array{Int64,1}:
 0
 0
 0
 0
 0
 0
 0
 0
 0
 0
 0

julia>




Re: [julia-users] When does colon indexing get evaluated / converted?

2015-09-20 Thread 'Greg Plowman' via julia-users
Hi Spencer,

Thanks for your reply.

Actually I'm trying to follow the Interfaces chapter.

I'm using Julia v0.3, so that might be a problem.

Also I think my problem is mainly for vectors (1-dimensional arrays), 
because linear and subscript indexing have the same syntax.

For multi-dimensional arrays, I can map subscript indexes to the default 
1:length linear indexes.

A = MyArray(Int, -2:8, -2:6) # defines 11x9 Matrix
A[:] gets translated to A[1:99], which can be handled by linear indexing

B = MyArray(Int, -2:8) # defines 11-element Vector
B[:] gets translated to B[1:8], I need it to be translated to B[-2:8]

So I'm trying to find out where this conversion happens, so I can redefine 
it.

-- Greg

PS Not really sure what to do about email name. Name on julia-users shows: me (Greg Plowman).


On Monday, September 21, 2015 at 1:15:42 PM UTC+10, Spencer Russell wrote:

> Hi Greg,
>
> This doesn’t answer your question directly, but I recommend you check out 
> the Interfaces 
> <http://docs.julialang.org/en/latest/manual/interfaces/#indexing> chapter 
> of the manual for some good info on creating your own array type. To get 
> indexing working the most important parts are:
>
> 1. subtype AbstractArray
> 2. implement the Base.linearindexing(Type) method
> 3. implement either getindex(A, i::Int) or 
> getindex(A, i1::Int, ..., iN::Int), (and the setindex! versions)  depending 
> on step 2.
>
> Then the various indexing behaviors (ranges, etc.) should work for your 
> type, as under-the-hood they boil down to the more basic indexing method 
> that you define in step 3. One thing to note that’s not spelled out super 
> clearly in the manual yet is that if you want the results of range indexing 
> to be wrapped in your type, you should also implement `similar`. There’s 
> some more details on that in this pr 
> <https://github.com/JuliaLang/julia/pull/13212/files>.
>
> OT: something about the way your email client is configured is causing you 
> to show up as julia...@googlegroups.com  where for other 
> users I see their names (at least in my email client), so it’s a bit hard 
> to see who you are in lists of thread participant names.
>
> -s
>
>
>
>
> On Sep 20, 2015, at 9:34 PM, 'Greg Plowman' via julia-users <
> julia...@googlegroups.com > wrote:
>
> Hi,
>
> I'm trying to define a custom Array type that can be indexed using 
> arbitrary ranges.
>
> e.g. A = MyArray(Int, 3:8) would define a 6-element vector with indexes 
> ranging from 3 to 8, rather than the default 1 to 6.
>
> I've made some progress, but am now stuck on how to handle colon indexing.
>
> A[4:6] works by defining appropriate getindex and setindex!
>
> e.g.  setindex!{T,S<:Real}(A::MyArray{T,1}, value, I::AbstractVector{S}) 
> = ...
>
> but A[:] = 0 seems to get translated to A[1:6] before dispatch on 
> setindex!, so I can't hijack the call.
>
> From subarray.jl, the code below suggests I can specialise on the Colon 
> type, but this doesn't seem to work for me. Colon appears to be converted 
> to UnitRange *before* calling setindex!
>
> sub(A::AbstractArray, I::Union(RangeIndex, Colon)...) = sub(A, ntuple(
> length(I), i-> isa(I[i], Colon) ? (1:size(A,i)) : I[i])...)
>
>
> Is there a way around this?
> Should I be able to specialise on the colon argument?
>
> -- Greg
>
>
>

Re: [julia-users] Adding remote workers on windows

2015-09-08 Thread 'Greg Plowman' via julia-users

>
> I meant the remote machine/network may be firewalled to only accept 
> incoming ssh, http and other known ports.

  
OK sorry. By now you can probably guess I don't really understand 
networking.

Anyway I turned off the remote firewall entirely, and addprocs() 
successfully added a remote worker. Yippee!
Then I re-enabled the remote firewall and confirmed the ssh server was allowed 
through the firewall. But now addprocs() failed with the wait error as before.
Definitely something to do with firewall blocking, but seemingly not the ssh 
server. What to do?
After much playing around, and for some unknown but inspired reason (well, 
for me anyway), I decided to add Julia to the allowed "apps", and presto, it 
worked!
Not really sure why Julia wasn't already in the allowed list. I didn't need 
to do this on another network where it just worked out of the box.
 
In any case, for completeness and so it might help anyone else, here's what 
I did:

Control Panel
Windows Firewall
Allow an app or feature through Windows Firewall
(since Julia was not already in list)
Allow another app
Select Julia from list of applications, and click Add.
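For reference, the tunnel=true alternative mentioned elsewhere in the thread avoids opening the worker port altogether by carrying the connection back over ssh; a hedged sketch (the user@host below is a placeholder, and a customised ClusterManager may take different options):

addprocs(["greg@192.168.1.107"]; tunnel=true)   # only port 22 needs to be reachable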

 
 
On Tuesday, September 8, 2015 at 1:27:44 PM UTC+10, Amit Murthy wrote:

> I meant the remote machine/network may be firewalled to only accept 
> incoming ssh, http and other known ports.
>  
> On Tue, Sep 8, 2015 at 5:49 AM, greg_plowman via julia-users <
> julia...@googlegroups.com > wrote:
>  
>
>> Is port 9009 open on the remote machine? You could try with "tunnel=true" 
>>> if it is not open.
>>
>>  
>> I think so.
>> After running addprocs() and before the wait error, netstat on the 
>> remote machine outputs the following:
>>  
>> C:\Users\Greg>netstat -an
>> Active Connections
>>   Proto  Local Address  Foreign AddressState
>>   TCP0.0.0.0:22 0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:1350.0.0.0:0  LISTENING
>>   TCP0.0.0.0:4450.0.0.0:0  LISTENING
>>   TCP0.0.0.0:5540.0.0.0:0  LISTENING
>>   TCP0.0.0.0:2869   0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:3389   0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:5357   0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:8092   0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:9009   0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:10243  0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:26143  0.0.0.0:0  LISTENING
>>   TCP0.0.0.0:47984  0.0.0.0:0  LISTENING
>> ...
>>  
>> When the remote session terminates, the 9009 entry is missing from 
>> netstat output.
>>  
>> On Monday, September 7, 2015 at 9:24:38 PM UTC+10, Amit Murthy wrote:
>>
>>> Is port 9009 open on the remote machine? You could try with 
>>> "tunnel=true" if it is not open.
>>>  
>>> On Mon, Sep 7, 2015 at 4:32 PM, Greg Plowman  
>>> wrote:
>>>  
>>>
 Hi, 
  
 I'm trying to use addprocs() to add remote workers on another windows 
 machine.
  
 I'm using a ssh server for windows (Bitvise) with a modified Cluster 
 Manager, and have successfully used this method in another environment.
 So I know that it works, although one difference is Window 7 (works) vs 
 Windows 8.1 (does not work), but I don't think this should be problem.
  
 Now, I don't expect anyone to troubleshoot my particular setup / 
 environment / customisation.
 Rather I was hoping for some high level help with further diagnosis.
  
 I can confirm that the windows command to launch the remote worker is 
 executed, and the remote machine receives a connection and then successful 
 login.
 The remote ssh server shows a successful connection and login, and 
 windows Task Manager shows a Julia process has started.  
 Then the following error occurs on the local machine, after which the 
 remote session is terminated.
  
 Error evaluating c:\Users\Greg\Julia6\src\Launcher.jl:
 connect: connection timed out (ETIMEDOUT)
  in wait at task.jl:284
  in wait at task.jl:194
  in stream_wait at stream.jl:263
  in wait_connected at stream.jl:301
  in Worker at multi.jl:113
  in create_worker at multi.jl:1064
  in start_cluster_workers at multi.jl:1028
  
  
 I guess my first question is which side (local or remote) is failing.
 It seems to me that the local Julia process is waiting for some 
 confirmation of connection? Does that sound right?
 If so, are there any suggestions on how to further diagnose problem.
  
 When the ssh command to start a remote Julia worker is executed 
 from the windows command line, I get the following:
 julia_worker:9009#192.168.1.107
  
 Then after about 60s:
 Master process (id 1) could not connect within 60.0 seconds.
 exiting.
  
 Presumably this is the