[julia-users] How do I add a column to a subdataframe?

2014-12-11 Thread Tony Fong

In a vanilla dataframe, I can do this
df[ :a ] = mydarray

It doesn't seem to allow me to do the same for a subdataframe returned by a 
groupby.

I'm trying to compute a new column based on the subdataframe, attach this 
column to it and then do further groupby.

Is there a way to do that?

Tony
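[Editor's note: a minimal sketch of one common workaround, assuming the 2014-era DataFrames.jl API and hypothetical column names :g, :a, :b — rather than mutating each SubDataFrame, have the `by` do-block return a copy with the extra column, and group the combined result again:]

```julia
using DataFrames

# Hypothetical data: group column :g, value column :a
df = DataFrame(g = [1, 1, 2, 2], a = [1.0, 2.0, 3.0, 4.0])

# by() passes each group to the do-block and stitches the returned
# frames back together; the result is an ordinary DataFrame that can
# be grouped again.
df2 = by(df, :g) do sdf
    out = DataFrame(a = sdf[:a])
    out[:b] = sdf[:a] - mean(sdf[:a])   # new per-group column
    out
end
```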


[julia-users] Re: Poor parallel performance

2014-12-11 Thread Dejan Miljkovic
For a solution, take a look at the Stack Overflow thread:
http://stackoverflow.com/questions/27396024/poor-parallel-performance

In essence, the solution is to use global variables (declared with @everywhere) 
to pass the graph data to the worker processes.

Dejan
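[Editor's note: a sketch of that approach with hypothetical names — `load_graph`, `mean_shortest_paths`, and `num_vertices` stand in for whatever the real code uses. The point is that data defined with @everywhere exists on every worker, so pmap tasks ship only a vertex id instead of a copy of the large graph:]

```julia
# Launch with: julia -p 8 script.jl

# Define the graph on every process so pmap workers don't have to
# receive a serialized copy of it with each task.
@everywhere begin
    const graph = load_graph("graph.txt")            # hypothetical loader
    centrality(v) = mean_shortest_paths(graph, v)    # hypothetical kernel
end

# Each task now carries only a vertex id; the heavy data stays put.
results = pmap(centrality, 1:num_vertices(graph))
```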

On Wednesday, December 10, 2014 10:58:41 AM UTC-8, Dejan Miljkovic wrote:
>
> I am getting performance degradation after parallelizing the code that is 
> calculating graph centrality. The graph is relatively large, 100K vertices. 
> The single-threaded application takes approximately 7 minutes. As recommended 
> on the julialang site (
> http://julia.readthedocs.org/en/latest/manual/parallel-computing/#man-parallel-computing)
>  
> I adapted the code and used the pmap API in order to parallelize the 
> calculations. I started the calculation with 8 processes (julia -p 8 
> test_parallel_pmap). To my surprise I got a 10-fold slowdown. The parallel 
> run now takes more than an hour. I noticed that it takes several minutes for 
> the parallel processes to initialize and start calculating. Even after all 8 
> CPUs are 100% busy with the julia app, the calculation is extremely slow.
>
> Attached is julia code:
>
> 1) test_parallel_pmap.jl reads the graph from a file and starts the parallel 
> calculation. 
>
> 2) centrality_mean.jl calculates centrality. The code is based on 
> https://gist.github.com/SirVer/3353761
>
>
> Any suggestions on how to improve the parallel performance are greatly appreciated. 
>
> Thanks,
>
> Dejan
>
>
>
>

Re: [julia-users] Weird timing issue

2014-12-11 Thread Sean McBane
Thanks Iain! I knew I'd seen something like *findnz* in the documentation 
somewhere but I couldn't find it again...

-- Sean

On Thursday, December 11, 2014 11:38:24 PM UTC-6, Iain Dunning wrote:
>
> Or even simpler:
>
> coords(A::SparseMatrixCSC) = collect(zip(findn(A)...))
>
> On Fri, Dec 12, 2014 at 12:22 AM, Iain Dunning wrote:
>>
>> Also, you can pretty much just do this with inbuilt functions:
>>
>> function coord(A::SparseMatrixCSC)
>> rows, cols, vals = findnz(A)
>> collect(zip(rows,cols))
>> end
>>
>>
>> On Friday, December 12, 2014 12:14:22 AM UTC-5, Iain Dunning wrote:
>>>
> >>> I imagine it's something like the following pattern:
>>>
>>> Run 1: generate X garbage
>>> Run 2: generate X garbage, for total 2X garbage, which is over 
>>> threshold, reduce back to 0
>>> Run 3: generate X garbage
>>> Run 4: generate X garbage, for total 2X garbage, which is over 
>>> threshold, reduce back to 0
>>> and so on
>>>
>>> On Friday, December 12, 2014 12:09:19 AM UTC-5, Sean McBane wrote:

 Alright. I am curious now as to what causes this behavior; hopefully 
 someone will offer an explanation.

 I'll be sure to from now on.

 -- Sean

 On Thursday, December 11, 2014 11:07:07 PM UTC-6, John Myles White 
 wrote:
>
> This is just how the GC works. Someone who's done more work on the GC 
> can give you more context about why the GC runs for the length of time it 
> runs for at each specific moment that it starts going. 
>
> As a favor to me, can you please make sure that you quote the entire 
> e-mail thread you're responding to? I find responding to e-mails without 
> context to be pretty jarring. 
>
>  -- John 
>
> On Dec 12, 2014, at 12:04 AM, Sean McBane  wrote: 
>
> > Right, I know I'm allocating it and discarding memory. However, if 
> the GC cleans up at deterministic points in time, as you point out in 
> your 
> first reply, why is timing erratic? And why the regular pattern in 
> timing? 
> It's always faster one call, slower one call, faster one call, slower one 
> call... 
>
>
>
> -- 
> *Iain Dunning*
> PhD Candidate / MIT Operations Research Center
> http://iaindunning.com / http://juliaopt.org
>  


Re: [julia-users] Weird timing issue

2014-12-11 Thread Iain Dunning
Or even simpler:

coords(A::SparseMatrixCSC) = collect(zip(findn(A)...))

On Fri, Dec 12, 2014 at 12:22 AM, Iain Dunning wrote:
>
> Also, you can pretty much just do this with inbuilt functions:
>
> function coord(A::SparseMatrixCSC)
> rows, cols, vals = findnz(A)
> collect(zip(rows,cols))
> end
>
>
> On Friday, December 12, 2014 12:14:22 AM UTC-5, Iain Dunning wrote:
>>
>> I imagine it's something like the following pattern:
>>
>> Run 1: generate X garbage
>> Run 2: generate X garbage, for total 2X garbage, which is over threshold,
>> reduce back to 0
>> Run 3: generate X garbage
>> Run 4: generate X garbage, for total 2X garbage, which is over threshold,
>> reduce back to 0
>> and so on
>>
>> On Friday, December 12, 2014 12:09:19 AM UTC-5, Sean McBane wrote:
>>>
>>> Alright. I am curious now as to what causes this behavior; hopefully
>>> someone will offer an explanation.
>>>
>>> I'll be sure to from now on.
>>>
>>> -- Sean
>>>
>>> On Thursday, December 11, 2014 11:07:07 PM UTC-6, John Myles White wrote:

 This is just how the GC works. Someone who's done more work on the GC
 can give you more context about why the GC runs for the length of time it
 runs for at each specific moment that it starts going.

 As a favor to me, can you please make sure that you quote the entire
 e-mail thread you're responding to? I find responding to e-mails without
 context to be pretty jarring.

  -- John

 On Dec 12, 2014, at 12:04 AM, Sean McBane  wrote:

 > Right, I know I'm allocating it and discarding memory. However, if
 the GC cleans up at deterministic points in time, as you point out in your
 first reply, why is timing erratic? And why the regular pattern in timing?
 It's always faster one call, slower one call, faster one call, slower one
 call...



-- 
*Iain Dunning*
PhD Candidate / MIT Operations Research Center
http://iaindunning.com / http://juliaopt.org


Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Clever.   Thanks!

Petr


On Thursday, December 11, 2014 9:26:34 PM UTC-8, John Myles White wrote:
>
> Petr, 
>
> You should be able to do something like the following: 
>
> function foo(n::Integer) 
> if iseven(n) 
> return 1.0 
> else 
> return 1 
> end 
> end 
>
> function bar1() 
> x = foo(1) 
> return x 
> end 
>
> function bar2() 
> x = foo(1)::Int 
> return x 
> end 
>
> julia> code_typed(bar1, ()) 
> 1-element Array{Any,1}: 
>  :($(Expr(:lambda, Any[], 
> Any[Any[:x],Any[Any[:x,Union(Float64,Int64),18]],Any[]], :(begin  # none, 
> line 2: 
> x = foo(1)::Union(Float64,Int64) # line 3: 
> return x::Union(Float64,Int64) 
> end::Union(Float64,Int64) 
>
> julia> code_typed(bar2, ()) 
> 1-element Array{Any,1}: 
>  :($(Expr(:lambda, Any[], Any[Any[:x],Any[Any[:x,Int64,18]],Any[]], 
> :(begin  # none, line 2: 
> x = (top(typeassert))(foo(1)::Union(Float64,Int64),Int)::Int64 # 
> line 3: 
> return x::Int64 
> end::Int64 
>
> In the output of code_typed, note how the strategically placed type 
> declaration at the point at which a type-unstable function is called 
> resolves the type inference problem completely when you move downstream 
> from the point of ambiguity. 
>
>  -- John 
>
> On Dec 12, 2014, at 12:20 AM, Petr Krysl wrote: 
>
> > John, 
> > 
> > I hear you.   I agree with you  that type instability is not very 
> helpful,  and indicates  problems with program design. 
> > However, I believe that  provided the program cannot resolve  the types 
> properly  (as it couldn't in the original design of my program, because I 
> haven't provided  declarations  of  variables where they were getting used, 
>  only in their data structure  types that the compiler apparently couldn't 
> see), the optimization  for loop performance  cannot be successful. It 
> certainly wasn't successful in this case. 
> > 
> > How would you solve  the problem with  storing a function and at the 
> same time allowing the compiler to deduce what  values it returns?  In my 
> case I store a function that always returns a floating-point array.   
> However, it may return a constant value supplied as input to the 
> constructor,  or it may return the value provided by another function (that 
> the  user of the type supplied). 
> > 
> > So, the type  of the return value is  stable, but I haven't found a way 
> of informing the compiler that it is so. 
> > 
> > Petr 
> > 
> > 
> > 
> > 
> > On Thursday, December 11, 2014 8:20:20 PM UTC-8, John Myles White wrote: 
> > > The moral of this story is: If you can't or  won't  declare every 
> single variable, don't do loops. They are likely to be a losing 
> proposition. 
> > 
> > I don't think this lesson will serve most people well. It doesn't 
> reflect my experiences using Julia at all. 
> > 
> > My experience is that code that requires variable type declarations 
> usually suffers from a deeper problem that the variable declarations 
> suppress without solving: either (a) there's some insoluble source of 
> ambiguity in the program (as occurs when calling a function that's passed 
> around as a value and therefore not amenable to static analysis) or (b) 
> there's some subtle source of type instability, as happens sometimes when 
> mixing integers and floating point numbers. 
> > 
> >  -- John 
> > 
>
>

Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread John Myles White
Petr,

You should be able to do something like the following:

function foo(n::Integer)
if iseven(n)
return 1.0
else
return 1
end
end

function bar1()
x = foo(1)
return x
end

function bar2()
x = foo(1)::Int
return x
end

julia> code_typed(bar1, ())
1-element Array{Any,1}:
 :($(Expr(:lambda, Any[], 
Any[Any[:x],Any[Any[:x,Union(Float64,Int64),18]],Any[]], :(begin  # none, line 
2:
x = foo(1)::Union(Float64,Int64) # line 3:
return x::Union(Float64,Int64)
end::Union(Float64,Int64)

julia> code_typed(bar2, ())
1-element Array{Any,1}:
 :($(Expr(:lambda, Any[], Any[Any[:x],Any[Any[:x,Int64,18]],Any[]], :(begin  # 
none, line 2:
x = (top(typeassert))(foo(1)::Union(Float64,Int64),Int)::Int64 # line 3:
return x::Int64
end::Int64

In the output of code_typed, note how the strategically placed type declaration 
at the point at which a type-unstable function is called resolves the type 
inference problem completely when you move downstream from the point of 
ambiguity.

 -- John

On Dec 12, 2014, at 12:20 AM, Petr Krysl  wrote:

> John,
> 
> I hear you.   I agree with you  that type instability is not very helpful,  
> and indicates  problems with program design. 
> However, I believe that  provided the program cannot resolve  the types 
> properly  (as it couldn't in the original design of my program, because I 
> haven't provided  declarations  of  variables where they were getting used,  
> only in their data structure  types that the compiler apparently couldn't 
> see), the optimization  for loop performance  cannot be successful. It 
> certainly wasn't successful in this case.
> 
> How would you solve  the problem with  storing a function and at the same 
> time allowing the compiler to deduce what  values it returns?  In my case I 
> store a function that always returns a floating-point array.   However, it 
> may return a constant value supplied as input to the constructor,  or it may 
> return the value provided by another function (that the  user of the type 
> supplied).
> 
> So, the type  of the return value is  stable, but I haven't found a way of 
> informing the compiler that it is so.
> 
> Petr
> 
> 
> 
> 
> On Thursday, December 11, 2014 8:20:20 PM UTC-8, John Myles White wrote:
> > The moral of this story is: If you can't or  won't  declare every single 
> > variable, don't do loops. They are likely to be a losing proposition. 
> 
> I don't think this lesson will serve most people well. It doesn't reflect my 
> experiences using Julia at all. 
> 
> My experience is that code that requires variable type declarations usually 
> suffers from a deeper problem that the variable declarations suppress without 
> solving: either (a) there's some insoluble source of ambiguity in the program 
> (as occurs when calling a function that's passed around as a value and 
> therefore not amenable to static analysis) or (b) there's some subtle source 
> of type instability, as happens sometimes when mixing integers and floating 
> point numbers. 
> 
>  -- John 
> 



Re: [julia-users] Weird timing issue

2014-12-11 Thread Iain Dunning
Also, you can pretty much just do this with inbuilt functions:

function coord(A::SparseMatrixCSC)
rows, cols, vals = findnz(A)
collect(zip(rows,cols))
end
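[Editor's note: a quick, self-contained usage sketch of the `coord` helper above, using Base's `sprand`, `findnz`, and `nnz` — every (row, col) pair of a stored entry comes back as a tuple:]

```julia
# Collect the (row, col) coordinates of the stored entries.
function coord(A::SparseMatrixCSC)
    rows, cols, vals = findnz(A)
    collect(zip(rows, cols))
end

A = sprand(4, 4, 0.5)       # random sparse test matrix
ijs = coord(A)              # array of (row, col) tuples
@assert length(ijs) == nnz(A)
```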


On Friday, December 12, 2014 12:14:22 AM UTC-5, Iain Dunning wrote:
>
> I imagine it's something like the following pattern:
>
> Run 1: generate X garbage
> Run 2: generate X garbage, for total 2X garbage, which is over threshold, 
> reduce back to 0
> Run 3: generate X garbage
> Run 4: generate X garbage, for total 2X garbage, which is over threshold, 
> reduce back to 0
> and so on
>
> On Friday, December 12, 2014 12:09:19 AM UTC-5, Sean McBane wrote:
>>
>> Alright. I am curious now as to what causes this behavior; hopefully 
>> someone will offer an explanation.
>>
>> I'll be sure to from now on.
>>
>> -- Sean
>>
>> On Thursday, December 11, 2014 11:07:07 PM UTC-6, John Myles White wrote:
>>>
>>> This is just how the GC works. Someone who's done more work on the GC 
>>> can give you more context about why the GC runs for the length of time it 
>>> runs for at each specific moment that it starts going. 
>>>
>>> As a favor to me, can you please make sure that you quote the entire 
>>> e-mail thread you're responding to? I find responding to e-mails without 
>>> context to be pretty jarring. 
>>>
>>>  -- John 
>>>
>>> On Dec 12, 2014, at 12:04 AM, Sean McBane  wrote: 
>>>
>>> > Right, I know I'm allocating it and discarding memory. However, if the 
>>> GC cleans up at deterministic points in time, as you point out in your 
>>> first reply, why is timing erratic? And why the regular pattern in timing? 
>>> It's always faster one call, slower one call, faster one call, slower one 
>>> call... 
>>>
>>>

Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
John,

I hear you. I agree with you that type instability is not very helpful 
and indicates problems with program design.
However, I believe that as long as the program cannot resolve the types 
properly (as it couldn't in the original design of my program, because I 
hadn't provided declarations of the variables where they were getting 
used, only in their data structure types, which the compiler apparently 
couldn't see), the optimization for loop performance cannot be 
successful. It certainly wasn't successful in this case.

How would you solve the problem of storing a function while at the same 
time allowing the compiler to deduce what values it returns? In my case I 
store a function that always returns a floating-point array. However, it 
may return a constant value supplied as input to the constructor, or it 
may return the value provided by another function (that the user of the 
type supplied).

So, the type of the return value is stable, but I haven't found a way of 
informing the compiler that it is so.

Petr
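[Editor's note: one way to tell the compiler about a stable return type is to assert it at the point where the stored function is called. A sketch with hypothetical names (`Material`, `tangent`, `evaluate` are illustrative, not from the original code); the field's declared type `Function` hides the return type from inference, but the typeassert restores it downstream:]

```julia
immutable Material
    tangent::Function   # user-supplied; always returns a Vector{Float64}
end

# Without the assert, inference sees the call as returning Any.
# With it, everything downstream of v is concretely typed.
function evaluate(m::Material, x)
    v = m.tangent(x)::Vector{Float64}   # typeassert at the call site
    sum(v)
end

m = Material(x -> [x, 2x])
evaluate(m, 1.0)   # 3.0
```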




On Thursday, December 11, 2014 8:20:20 PM UTC-8, John Myles White wrote:
>
> > The moral of this story is: If you can't or  won't  declare every single 
> variable, don't do loops. They are likely to be a losing proposition. 
>
> I don't think this lesson will serve most people well. It doesn't reflect 
> my experiences using Julia at all. 
>
> My experience is that code that requires variable type declarations 
> usually suffers from a deeper problem that the variable declarations 
> suppress without solving: either (a) there's some insoluble source of 
> ambiguity in the program (as occurs when calling a function that's passed 
> around as a value and therefore not amenable to static analysis) or (b) 
> there's some subtle source of type instability, as happens sometimes when 
> mixing integers and floating point numbers. 
>
>  -- John 
>
>

Re: [julia-users] Weird timing issue

2014-12-11 Thread Iain Dunning
I imagine it's something like the following pattern:

Run 1: generate X garbage
Run 2: generate X garbage, for total 2X garbage, which is over threshold, 
reduce back to 0
Run 3: generate X garbage
Run 4: generate X garbage, for total 2X garbage, which is over threshold, 
reduce back to 0
and so on
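[Editor's note: the alternation can be observed with a toy loop that allocates the same amount of garbage on every call — a sketch only, since the actual collection threshold depends on the GC configuration. The call on which the threshold is crossed pays the collection cost, so the timings alternate:]

```julia
# Each call allocates and discards the same temporary array, yet
# @time alternates: the collection triggered on every second call
# shows up as extra time and gc percentage on that call.
makegarbage(n) = sum([rand() for i in 1:n])

makegarbage(1)          # warm up (compilation)
for run in 1:6
    @time makegarbage(10^6)
end
```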

On Friday, December 12, 2014 12:09:19 AM UTC-5, Sean McBane wrote:
>
> Alright. I am curious now as to what causes this behavior; hopefully 
> someone will offer an explanation.
>
> I'll be sure to from now on.
>
> -- Sean
>
> On Thursday, December 11, 2014 11:07:07 PM UTC-6, John Myles White wrote:
>>
>> This is just how the GC works. Someone who's done more work on the GC can 
>> give you more context about why the GC runs for the length of time it runs 
>> for at each specific moment that it starts going. 
>>
>> As a favor to me, can you please make sure that you quote the entire 
>> e-mail thread you're responding to? I find responding to e-mails without 
>> context to be pretty jarring. 
>>
>>  -- John 
>>
>> On Dec 12, 2014, at 12:04 AM, Sean McBane  wrote: 
>>
>> > Right, I know I'm allocating it and discarding memory. However, if the 
>> GC cleans up at deterministic points in time, as you point out in your 
>> first reply, why is timing erratic? And why the regular pattern in timing? 
>> It's always faster one call, slower one call, faster one call, slower one 
>> call... 
>>
>>

Re: [julia-users] Weird timing issue

2014-12-11 Thread Sean McBane
Alright. I am curious now as to what causes this behavior; hopefully 
someone will offer an explanation.

I'll be sure to from now on.

-- Sean

On Thursday, December 11, 2014 11:07:07 PM UTC-6, John Myles White wrote:
>
> This is just how the GC works. Someone who's done more work on the GC can 
> give you more context about why the GC runs for the length of time it runs 
> for at each specific moment that it starts going. 
>
> As a favor to me, can you please make sure that you quote the entire 
> e-mail thread you're responding to? I find responding to e-mails without 
> context to be pretty jarring. 
>
>  -- John 
>
> On Dec 12, 2014, at 12:04 AM, Sean McBane wrote: 
>
> > Right, I know I'm allocating it and discarding memory. However, if the 
> GC cleans up at deterministic points in time, as you point out in your 
> first reply, why is timing erratic? And why the regular pattern in timing? 
> It's always faster one call, slower one call, faster one call, slower one 
> call... 
>
>

Re: [julia-users] Weird timing issue

2014-12-11 Thread John Myles White
This is just how the GC works. Someone who's done more work on the GC can give 
you more context about why the GC runs for the length of time it runs for at 
each specific moment that it starts going.

As a favor to me, can you please make sure that you quote the entire e-mail 
thread you're responding to? I find responding to e-mails without context to be 
pretty jarring.

 -- John

On Dec 12, 2014, at 12:04 AM, Sean McBane  wrote:

> Right, I know I'm allocating it and discarding memory. However, if the GC 
> cleans up at deterministic points in time, as you point out in your first 
> reply, why is timing erratic? And why the regular pattern in timing? It's 
> always faster one call, slower one call, faster one call, slower one call...



Re: [julia-users] Weird timing issue

2014-12-11 Thread Sean McBane
Right, I know I'm allocating it and discarding memory. However, if the GC 
cleans up at *deterministic* points in time, as you point out in your first 
reply, why is timing erratic? And why the regular pattern in timing? It's 
always faster one call, slower one call, faster one call, slower one call...


Re: [julia-users] Weird timing issue

2014-12-11 Thread John Myles White
Well, you're clearly allocating memory and discarding it since the list 
comprehension and sort both allocate memory. So that's garbage that the GC has 
to deal with. The GC means that your function's timing will be erratic and, 
generally, longer on a second pass than during the first.

 -- John

On Dec 11, 2014, at 11:54 PM, Sean McBane  wrote:

> function getIJValues(A::SparseMatrixCSC)
> m,n = size(A)
> rowcoords = rowvals(A)
> coordinates = []
> for j = 1:n
> append!(coordinates, [(rowcoords[i],j) for i in nzrange(A,j)])
> end
> return sort(coordinates)
> end
> 
> 
> I've never formally studied any computer science or programming so I don't 
> have a great grasp on what goes on underneath this, but it seems to me like 
> the only thing the garbage collector should need to do would be free the 
> memory taken up by the list comprehension inside the loop at the end of the 
> loop. And perhaps this could be redone in a more efficient manner, but 
> inlining it like that seemed most natural. But what's strange is that called 
> twice in a row, with exactly the same input, it takes twice as long the 
> second time as the first.
> 
> Otherwise, I certainly wouldn't be surprised if this method is inherently 
> inefficient, since I only picked up the language yesterday.
> 
> -- Sean



[julia-users] Re: Weird timing issue

2014-12-11 Thread Sean McBane
Oh yeah, and one more note - I'm using the dev build (0.4.0) for access to 
the *rowvals* and *nzrange* functions. 


Re: [julia-users] Weird timing issue

2014-12-11 Thread Sean McBane
function getIJValues(A::SparseMatrixCSC)
m,n = size(A)
rowcoords = rowvals(A)
coordinates = []
for j = 1:n
append!(coordinates, [(rowcoords[i],j) for i in nzrange(A,j)])
end
return sort(coordinates)
end


I've never formally studied any computer science or programming so I don't 
have a great grasp on what goes on underneath this, but it seems to me like 
the only thing the garbage collector should need to do would be free the 
memory taken up by the list comprehension inside the loop at the end of the 
loop. And perhaps this could be redone in a more efficient manner, but 
inlining it like that seemed most natural. But what's strange is that 
called twice in a row, with exactly the same input, it takes twice as long 
the second time as the first.

Otherwise, I certainly wouldn't be surprised if this method is inherently 
inefficient, since I only picked up the language yesterday.

-- Sean


Re: [julia-users] Weird error about nonexistence of a method

2014-12-11 Thread Stefan Karpinski
It is possible that you have managed to get into a state where there are two 
different types by the name Params.


> On Dec 11, 2014, at 9:10 PM, Test This  wrote:
> 
> 
> I am running into what appears to be weird error. I have this function 
> simulate that takes two arguments. When I try to run the file containing this 
> function I get 
> the following error. I have added println( methods(simulate) ) to the code so 
> that you can see its methods.
> 
> 
> # 1 method for generic function "simulate":
> simulate(params::Params,rseed::Int64) at /Users/code/simulationcode.jl:340
> ERROR: `simulate` has no method matching simulate(::Params, ::Int64)
> 
> Are the 2nd and 3rd lines not contradictory? 
> 
> Thanks in advance for your help.


Re: [julia-users] Need help to understand method dispatch with modules

2014-12-11 Thread elextr


On Friday, December 12, 2014 10:36:40 AM UTC+10, samoconnor wrote:
>
> On Friday, December 12, 2014 9:58:57 AM UTC+11, ele...@gmail.com wrote:
>>
>> See open Issues https://github.com/JuliaLang/julia/issues/2327 and 
>> https://github.com/JuliaLang/julia/issues/4345
>>
>
> Ok, I take it that the short answer, from #4345, is that the intended 
> behaviour is not yet well thought out or well defined. 
>


Yeah, that was the message I was getting from those. 


[julia-users] Re: Weird timing issue

2014-12-11 Thread Sean McBane
As a note, the amount of memory allocated in both cases is the same. The 
difference seems to be only caused by garbage collection.

On Thursday, December 11, 2014 10:43:03 PM UTC-6, Sean McBane wrote:
>
> Hi all,
>
> So, I'm starting to define a couple of routines to be used in iterative 
> solvers, optimized for sparse matrices. I wrote a routine that returns me a 
> list of tuples containing the (i,j) coordinates for the non-zero values 
> from a sparse matrix and was testing it for timing, but observed a 
> performance issue I don't understand.
>
> Testing using a 100x100 sparse matrix from a sample finite 
> difference problem, containing about 500 non-zero values, @time shows a 
> time of ~20s the first time I load the module and execute the function, and 
> about 3-5% of that is gc. I'm sure I can speed that up later, but the 
> really odd thing is that the NEXT run, it takes ~40s and about 50% of that 
> is garbage collection. The next one after that is back to 20s, and it keeps 
> going back and forth every time. This seems really weird to me.
>
> Anyway, to confirm that I wasn't going crazy I wrote a loop and timed this 
> a hundred times and collected results, and it keeps following the same 
> pattern. See attached 'times.txt' with the numbers. Any ideas what could be 
> causing this behavior?
>
> Thanks,
>
> -- Sean
>


Re: [julia-users] Weird timing issue

2014-12-11 Thread John Myles White
Are you creating a bunch of garbage? My understanding is that any garbage that 
gets created will be cleaned up at seemingly haphazard (but fully 
deterministic) points in time.

 -- John

On Dec 11, 2014, at 11:43 PM, Sean McBane  wrote:

> Hi all,
> 
> So, I'm starting to define a couple of routines to be used in iterative 
> solvers, optimized for sparse matrices. I wrote a routine that returns me a 
> list of tuples containing the (i,j) coordinates for the non-zero values from 
> a sparse matrix and was testing it for timing, but observed a performance 
> issue I don't understand.
> 
> Testing using a 100x100 sparse matrix from a sample finite difference 
> problem, containing about 500 non-zero values, @time shows a time of ~20s 
> the first time I load the module and execute the function, and about 3-5% of 
> that is gc. I'm sure I can speed that up later, but the really odd thing is 
> that the NEXT run, it takes ~40s and about 50% of that is garbage collection. 
> The next one after that is back to 20s, and it keeps going back and forth 
> every time. This seems really weird to me.
> 
> Anyway, to confirm that I wasn't going crazy I wrote a loop and timed this a 
> hundred times and collected results, and it keeps following the same pattern. 
> See attached 'times.txt' with the numbers. Any ideas what could be causing 
> this behavior?
> 
> Thanks,
> 
> -- Sean



[julia-users] Weird timing issue

2014-12-11 Thread Sean McBane
Hi all,

So, I'm starting to define a couple of routines to be used in iterative 
solvers, optimized for sparse matrices. I wrote a routine that returns me a 
list of tuples containing the (i,j) coordinates for the non-zero values 
from a sparse matrix and was testing it for timing, but observed a 
performance issue I don't understand.

Testing using a 100x100 sparse matrix from a sample finite 
difference problem, containing about 500 non-zero values, @time shows a 
time of ~20s the first time I load the module and execute the function, and 
about 3-5% of that is gc. I'm sure I can speed that up later, but the 
really odd thing is that the NEXT run, it takes ~40s and about 50% of that 
is garbage collection. The next one after that is back to 20s, and it keeps 
going back and forth every time. This seems really weird to me.

Anyway, to confirm that I wasn't going crazy I wrote a loop and timed this 
a hundred times and collected results, and it keeps following the same 
pattern. See attached 'times.txt' with the numbers. Any ideas what could be 
causing this behavior?

Thanks,

-- Sean


Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread John Myles White
> The moral of this story is: If you can't or  won't  declare every single 
> variable, don't do loops. They are likely to be a losing proposition.

I don't think this lesson will serve most people well. It doesn't reflect my 
experiences using Julia at all.

My experience is that code that requires variable type declarations usually 
suffers from a deeper problem that the variable declarations suppress without 
solving: either (a) there's some insoluble source of ambiguity in the program 
(as occurs when calling a function that's passed around as a value and 
therefore not amenable to static analysis) or (b) there's some subtle source of 
type instability, as happens sometimes when mixing integers and floating point 
numbers.

 -- John
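[Editor's note: a classic instance of case (b) above, where an integer initializer makes a loop accumulator type-unstable — inference sees Union(Int64,Float64) for x in the first version, while initializing with the type the loop actually produces keeps it concrete:]

```julia
# Type-unstable: x starts as Int, then becomes Float64 inside the
# loop, so x's inferred type is Union(Int64,Float64).
function unstable(n)
    x = 0
    for i in 1:n
        x += 0.5
    end
    x
end

# Stable: initialize with the type the loop will actually produce.
function stable(n)
    x = 0.0
    for i in 1:n
        x += 0.5
    end
    x
end
```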



[julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Everybody:

Thank you very much for your advice. I believe that I have gotten the 
loops working now.

The method conductivity(), 2 million triangles:
With loops expanded:    25.294999069 seconds (5253831296 bytes allocated)
With matrix operations: 29.744513256 seconds (6564763296 bytes allocated)
So now the loops  win by about 15%  of time consumed.

The key was apparently to have every single variable declared. I'm not 
quite sure how the compiler is able to pursue the tree of interdependent 
modules to ferret out the types, but I think it must have missed a few 
(the modules properly declared the types, but I'm not sure if the 
compiler saw them). Therefore I believe the loops were losing heavily 
compared to matrix operations.

So I went in and declared everything I could get my hands on. That made 
the loops competitive.

The moral of this story is: if you can't or won't declare every single 
variable, don't do loops. They are likely to be a losing proposition.

The updated code is at 
https://gist.github.com/PetrKryslUCSD/7bd14515e1a853275923

Petr

PS: One particular problem was that I was calling the function stored 
as type Function. Being new to the language, I don't know if I can 
declare the function to have a type that would indicate the type of its 
return value. (Is it possible?) Without this information, the compiler is 
apparently stuck with data whose types need to be determined at runtime. 
Unfortunately, the compiler does not throw fits when it does not know the 
type, and then bad things can happen, as in the present example, where 
expanding the loops was twice as slow as doing a matrix operation.



[julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Robert,

This is very nice. Basically it confirms that if every single variable is 
properly declared and the compiler can make all its optimizations, then 
the loops have a chance of working.

I got a bit lost in the follow-up discussion: I think the message chain 
might have been broken.

Petr


On Thursday, December 11, 2014 2:05:40 PM UTC-8, Robert Gates wrote:
>
> Hi Petr,
>
> I just tried the devectorized problem, although I did choose to go a bit 
> of a different route: 
> https://gist.github.com/rleegates/2d99e6251fe246b017ac   
> I am not sure that this is what you intended, however, using the 
> vectorized code as a reference, I do obtain the same results up to machine 
> epsilon.
>
> Anyways, I got:
>
> In  [4]: keTest(200_000)
> Vectorized:
> elapsed time: 0.426404203 seconds (140804768 bytes allocated, 22.42% gc 
> time)
> DeVectorized:
> elapsed time: 0.078519349 seconds (128 bytes allocated)
> DeVectorized InBounds:
> elapsed time: 0.032812311 seconds (128 bytes allocated)
> Error norm deVec: 0.0
> Error norm inBnd: 0.0
>
> On Thursday, December 11, 2014 5:47:01 PM UTC+1, Petr Krysl wrote:
>>
>> Acting upon the advice that replacing matrix-matrix multiplications in 
>> vectorized form with loops would help with performance, I chopped out a 
>> piece of code from my finite element solver (
>> https://gist.github.com/anonymous/4ec426096c02faa4354d) and ran some 
>> tests with the following results:
>>
>> Vectorized code:
>> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
>> time)
>>
>> Loops code:
>> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
>> time) 
>>
>> SLOWER and using MORE memory?!
>>
>> I must be doing something terribly wrong.
>>
>> Petr
>>
>>

[julia-users] Weird error about nonexistence of a method

2014-12-11 Thread Test This

I am running into what appears to be a weird error. I have this function 
simulate that takes two arguments. When I try to run the file containing 
this function, I get the following error. I have added 
println( methods(simulate) ) to the code so that you can see its methods.


*# 1 method for generic function "simulate":*
*simulate(params::Params,rseed::Int64) at /Users/code/simulationcode.jl:340*
*ERROR: `simulate` has no method matching simulate(::Params, ::Int64)*


Are the 2nd and 3rd lines not contradictory? 

Thanks in advance for your help.


Re: [julia-users] Re: Community Support for Julia on Travis CI

2014-12-11 Thread Pontus Stenetorp
On 12 December 2014 at 00:28, Stefan Karpinski  wrote:
>
> Btw, can I be added to the JuliaCI org?

No idea who, but someone has taken care of it.  If someone else wants
to join, just give us a poke.

On 12 December 2014 at 00:49, Nils Gudat  wrote:
>
> Excuse the ignorant question, but what exactly does this mean? I haven't
> seen Travis CI before and clicking on the home page is slightly confusing...

Ignorance is fine; at least in academia you need some of it in order
to accomplish anything.  I agree that the homepage is a mess.
Essentially, they have a bunch of virtual servers that check whether you
have a pending pull request or have pushed to your repository.  Once
this happens they will pull the code from GitHub and build/run tests
on their machines to check that everything is all right.  If something
is wrong, you will get an e-mail/notification on GitHub.  This enables
you to test against multiple versions of Julia and saves you the time
of pulling code from other people to test locally on your machine before
accepting pull requests.  It is worth pointing out that I am by no
means a testing fanatic, but I still really enjoy the service that
Travis provides.

Pontus


Re: [julia-users] Define composite types in a different file - constructor not defined error

2014-12-11 Thread John Myles White
http://julia.readthedocs.org/en/release-0.3/manual/modules/

 -- John
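
As a minimal sketch of the pattern the manual describes (file and type 
names taken from the thread; the field values are hypothetical, since the 
real fields are elided):

```julia
# paramcombos.jl -- sketch: include() textually evaluates dataTypes.jl
# inside this module, so the Params type is defined in paramcombos itself.
module paramcombos

include("dataTypes.jl")      # brings `type Params ... end` into this module

function baseParams()
    # hypothetical field values
    Params(1, 2.0)
end

end
```

The key difference from require is that include evaluates the file in the 
enclosing module's namespace, so Params is visible to baseParams.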

On Dec 11, 2014, at 8:55 PM, Test This  wrote:

> Thank you, John. That worked!
> 
> Could you please direct me to a reference which explains when one should use 
> include/require/import/using?
> 
> Thank you.
> 
> On Thursday, December 11, 2014 7:39:23 PM UTC-5, John Myles White wrote:
> You want include, not require.
> 
>  -- John
> 
> On Dec 11, 2014, at 7:25 PM, Test This  wrote:
> 
>> 
>> I have two files: dataTypes.jl and paramcombos.jl
>> 
>> In dataTypes.jl I have 
>> 
>> type Params
>>  .
>>  . # field names and types
>>  .
>> end
>> 
>> 
>> In paramcombos.jl I have 
>> 
>> module paramcombos
>> 
>> require("dataTypes.jl")
>> 
>> function baseParams()
>>params = Params( field1 = blah1, field2 = blah2, ...)
>> end
>> 
>> end
>>  
>> 
>> In the julia repl if I do 
>> 
>> require("paramcombos.jl")
>> 
>> and then, 
>> 
>> basep = paramcombos.baseParams()
>> 
>> I get an error saying:
>> 
>> ERROR: Params not defined
>>  in baseParams at /Users/code/paramcombos.jl:33 (where 33 is the line shown 
>> above from baseParams() function. 
>> 
>> If I move type declaration to paramcombos.jl, things work fine. Is there a 
>> way to keep type definitions in one file and use the constructor in another 
>> file?
>> 
>> Thank you
> 



Re: [julia-users] Define composite types in a different file - constructor not defined error

2014-12-11 Thread Test This
Thank you, John. That worked!

Could you please direct me to a reference which explains when one should 
use include/require/import/using?

Thank you.

On Thursday, December 11, 2014 7:39:23 PM UTC-5, John Myles White wrote:
>
> You want include, not require.
>
>  -- John
>
> On Dec 11, 2014, at 7:25 PM, Test This > 
> wrote:
>
>
> I have two files: dataTypes.jl and paramcombos.jl
>
> In dataTypes.jl I have 
>
> *type Params*
> * .*
> * . # field names and types*
> * .*
> *end*
>
>
>
> In paramcombos.jl I have 
>
> *module paramcombos*
>
> *require("dataTypes.jl")*
>
>
> *function baseParams()*
> *   params = Params( field1 = blah1, field2 = blah2, ...)*
> *end*
>
>
> *end*
>
>  
>
> In the julia repl if I do 
>
> *require("paramcombos.jl")*
>
>
> and then, 
>
> *basep = paramcombos.baseParams()*
>
>
> I get an error saying:
>
> *ERROR: Params not defined*
> * in baseParams at /Users/code/paramcombos.jl:33 (where 33 is the line 
> shown above from baseParams() function. *
>
>
> If I move type declaration to paramcombos.jl, things work fine. Is there a 
> way to keep type definitions in one file and use the constructor in another 
> file?
>
> Thank you
>
>
>

Re: [julia-users] Re: home page content

2014-12-11 Thread cdm

in support of "Why Julia", it seems the fact that Julia is attracting some 
of the best
and brightest minds spanning a diverse collection of fields ought to be 
displayed
prominently ...

as an example, perusing the COIN-OR Cup winners list returns several 
familiar
names  ( see http://www.coin-or.org/coinCup/coinCup.html ) ... speaking of 
which,
it looks as though the cup needs to come home next year, as it is off for a 
stay in
Python land  ( http://www.coin-or.org/coinCup/coinCup2014Winner.html ) and
when it has been won again, the string "Julia" should be featured in the 
team/
application name.


in addition to INFORMS, there are other "conferences" to target ...
the hit-list:

http://en.wikipedia.org/wiki/List_of_computer_science_conferences


stay the course, Julians ...

cdm



On Thursday, December 11, 2014 3:40:52 AM UTC-8, Christoph Ortner wrote:
>
> I'm glad people agree with my "Why Julia" paragraph.
>


Re: [julia-users] Define composite types in a different file - constructor not defined error

2014-12-11 Thread John Myles White
You want include, not require.

 -- John

On Dec 11, 2014, at 7:25 PM, Test This  wrote:

> 
> I have two files: dataTypes.jl and paramcombos.jl
> 
> In dataTypes.jl I have 
> 
> type Params
>  .
>  . # field names and types
>  .
> end
> 
> 
> In paramcombos.jl I have 
> 
> module paramcombos
> 
> require("dataTypes.jl")
> 
> function baseParams()
>params = Params( field1 = blah1, field2 = blah2, ...)
> end
> 
> end
>  
> 
> In the julia repl if I do 
> 
> require("paramcombos.jl")
> 
> and then, 
> 
> basep = paramcombos.baseParams()
> 
> I get an error saying:
> 
> ERROR: Params not defined
>  in baseParams at /Users/code/paramcombos.jl:33 (where 33 is the line shown 
> above from baseParams() function. 
> 
> If I move type declaration to paramcombos.jl, things work fine. Is there a 
> way to keep type definitions in one file and use the constructor in another 
> file?
> 
> Thank you



Re: [julia-users] Need help to understand method dispatch with modules

2014-12-11 Thread samoconnor
On Friday, December 12, 2014 9:58:57 AM UTC+11, ele...@gmail.com wrote:
>
> See open Issues https://github.com/JuliaLang/julia/issues/2327 and 
> https://github.com/JuliaLang/julia/issues/4345
>

Ok, I take it that the short answer, from #4345, is that the intended 
behaviour is not well thought out or well defined yet. 


[julia-users] Define composite types in a different file - constructor not defined error

2014-12-11 Thread Test This

I have two files: dataTypes.jl and paramcombos.jl

In dataTypes.jl I have 

*type Params*
* .*
* . # field names and types*
* .*
*end*



In paramcombos.jl I have 

*module paramcombos*

*require("dataTypes.jl")*


*function baseParams()*
*   params = Params( field1 = blah1, field2 = blah2, ...)*
*end*


*end*

 

In the julia repl if I do 

*require("paramcombos.jl")*


and then, 

*basep = paramcombos.baseParams()*


I get an error saying:

*ERROR: Params not defined*
* in baseParams at /Users/code/paramcombos.jl:33 (where 33 is the line 
shown above from baseParams() function. *


If I move type declaration to paramcombos.jl, things work fine. Is there a 
way to keep type definitions in one file and use the constructor in another 
file?

Thank you


Re: [julia-users] LLVM3.2 and JULIA BUILD PROBLEM

2014-12-11 Thread Keno Fischer
LLVM 3.2 is no longer supported. I wouldn't be opposed to a patch
supporting 3.2, since we haven't formally dropped support for it yet
(i.e. there are still some ifdefs in the code) - it's just that nobody is
using it anymore.

On Thu, Dec 11, 2014 at 5:21 PM, Stefan Karpinski <
stefan.karpin...@gmail.com> wrote:

> LLVM 3.2 is no longer supported – the default Julia version of LLVM is 3.3.
>
>
> On Dec 11, 2014, at 4:58 PM, John Myles White 
> wrote:
>
> My understanding is that different versions of LLVM are enormously
> different and that there's no safe way to make Julia work with any version
> of LLVM other than the intended one.
>
>  -- John
>
> On Dec 11, 2014, at 4:56 PM, Vehbi Eşref Bayraktar <
> vehbi.esref.bayrak...@gmail.com> wrote:
>
> Hi;
>
> I am using llvm 3.2 with libnvvm . However when i try to build julia using
> those 2 flags :
> USE_SYSTEM_LLVM = 1
> USE_LLVM_SHLIB = 1
>
> I have a bunch of errors. starting as following:
>
> codegen.cpp: In function ‘void jl_init_codegen()’:
> codegen.cpp:4886:26: error: ‘getProcessTriple’ is not a member of
> ‘llvm::sys’
>  Triple TheTriple(sys::getProcessTriple()); // *llvm32 doesn't
> have this one instead it has getDefaultTargetTriple()*
>   ^
> codegen.cpp:4919:5: error: ‘mbuilder’ was not declared in this scope
>  mbuilder = new MDBuilder(getGlobalContext());  //  *include
>  would fix this*
>  ^
> codegen.cpp:4919:20: error: expected type-specifier before ‘MDBuilder’
>  mbuilder = new MDBuilder(getGlobalContext());
>
> Even you fix these errors, you keep hitting the following ones:
> In file included from codegen.cpp:976:0:
> intrinsics.cpp: In function ‘llvm::Value* emit_intrinsic(JL_I::intrinsic,
> jl_value_t**, size_t, jl_codectx_t*)’:
> intrinsics.cpp:1158:72: error: ‘ceil’ is not a member of ‘llvm::Intrinsic’
>  return builder.CreateCall(Intrinsic::getDeclaration(jl_Module,
> Intrinsic::ceil,
>
>
>
> So is the master branch currently supporting llvm32? Or is there a patch
> somewhere?
>
> Thanks
>
>
>


Re: [julia-users] Re: Need help to understand method dispatch with modules

2014-12-11 Thread samoconnor
I've just done a fresh build from git HEAD (julia version 0.4.0-dev+2067).
I don't see any difference in behaviour between 0.3.0-RC4 and 0.4 for the 
examples I have posted.

On Friday, December 12, 2014 10:58:19 AM UTC+11, Rob J Goedman wrote:
>
> Hmmm, now I think early on I saw you’re on Julia 0.3.0-RC4?
>
> I tried it on 0.3.3 and 0.4 both with the same output. Could that explain 
> the difference?
>
> Rob J. Goedman
> goe...@mac.com 
>
>
> *julia> *
> *include("/Users/rob/Projects/Julia/Rob/MetaProgramming/meta13.jl")*
> Int: 7
> ASCIIString: Foo
>   UTF8String: ∀ x ∃ y
>   Float64: 12.0
>   Complex{Float64}: 2.0 + 3.0im
>
>
>  
> On Dec 11, 2014, at 3:53 PM, samoconnor > 
> wrote:
>
> Hi Rob,
>
> I don't use the REPL. I have "#!/[...]bin/julia" on the first line of the 
> script and run ./script.jl from the command line.
>
> On Friday, December 12, 2014 10:27:45 AM UTC+11, Rob J Goedman wrote:
>>
>> Did you restart the REPL?
>>
>> Rob J. Goedman
>> goe...@mac.com
>>
>>
>>
>>
>>  
>> On Dec 11, 2014, at 3:19 PM, samoconnor  wrote:
>>
>> If I change the example to use "import" instead of "using"...
>>
>> import m1: f
>> import m2: f
>>
>> ... then I get:
>>
>> Warning: ignoring conflicting import of m2.f into Main
>>  ?: 7
>> String: Foo
>>
>> Now Julia spots the problem, but resolves it the opposite way (i.e. the 
>> first definition wins).
>>
>>
>>
>

Re: [julia-users] Re: Need help to understand method dispatch with modules

2014-12-11 Thread Rob J. Goedman
Hmmm, now I think early on I saw you’re on Julia 0.3.0-RC4?

I tried it on 0.3.3 and 0.4 both with the same output. Could that explain the 
difference?

Rob J. Goedman
goed...@mac.com


julia> include("/Users/rob/Projects/Julia/Rob/MetaProgramming/meta13.jl")
Int: 7
ASCIIString: Foo
  UTF8String: ∀ x ∃ y
  Float64: 12.0
  Complex{Float64}: 2.0 + 3.0im



> On Dec 11, 2014, at 3:53 PM, samoconnor  wrote:
> 
> Hi Rob,
> 
> I don't use the REPL. I have "#!/[...]bin/julia" on the first line of the 
> script and run ./script.jl from the command line.
> 
> On Friday, December 12, 2014 10:27:45 AM UTC+11, Rob J Goedman wrote:
> Did you restart the REPL?
> 
> Rob J. Goedman
> goe...@mac.com 
> 
> 
> 
> 
> 
>> On Dec 11, 2014, at 3:19 PM, samoconnor > 
>> wrote:
>> 
>> If I change the example to use "import" instead of "using"...
>> 
>> import m1: f
>> import m2: f
>> 
>> ... then I get:
>> 
>> Warning: ignoring conflicting import of m2.f into Main
>>  ?: 7
>> String: Foo
>> 
>> Now Julia spots the problem, but resolves it the opposite way (i.e. the 
>> first definition wins).
> 



Re: [julia-users] Re: Need help to understand method dispatch with modules

2014-12-11 Thread samoconnor
Hi Rob,

I don't use the REPL. I have "#!/[...]bin/julia" on the first line of the 
script and run ./script.jl from the command line.

On Friday, December 12, 2014 10:27:45 AM UTC+11, Rob J Goedman wrote:
>
> Did you restart the REPL?
>
> Rob J. Goedman
> goe...@mac.com 
>
>
>
>
>  
> On Dec 11, 2014, at 3:19 PM, samoconnor > 
> wrote:
>
> If I change the example to use "import" instead of "using"...
>
> import m1: f
> import m2: f
>
> ... then I get:
>
> Warning: ignoring conflicting import of m2.f into Main
>  ?: 7
> String: Foo
>
> Now Julia spots the problem, but resolves it the opposite way (i.e. the 
> first definition wins).
>
>
>

Re: [julia-users] Help with types (Arrays)

2014-12-11 Thread Stefan Karpinski
On Thu, Dec 11, 2014 at 3:41 PM, S  wrote:

> Thank you. I'm sorry I didn't see this before posting. I'll have a look.


No need to apologize, this part of Julia's type system can be confusing
coming from languages that have covariant parametric types.


Re: [julia-users] Re: EACCESS errors when starting julia 0.3.3

2014-12-11 Thread Stefan Karpinski
Thank you!

On Thu, Dec 11, 2014 at 5:31 PM, Robbin Bonthond 
wrote:

> https://github.com/JuliaLang/julia/issues/9319 has been filed
>
> On Thursday, December 11, 2014 3:07:06 PM UTC-6, Stefan Karpinski wrote:
>>
>> On Thu, Dec 11, 2014 at 4:05 PM, Robbin Bonthond 
>> wrote:
>>
>>> for some reason the binary installer of julia uses restricted group
>>> permissions
>>
>>
>> That sounds like a potential problem – would you mind filing an issue?
>>
>> https://github.com/JuliaLang/julia/issues
>>
>


Re: [julia-users] Re: Need help to understand method dispatch with modules

2014-12-11 Thread Rob J. Goedman
Did you restart the REPL?

Rob J. Goedman
goed...@mac.com





> On Dec 11, 2014, at 3:19 PM, samoconnor  wrote:
> 
> If I change the example to use "import" instead of "using"...
> 
> import m1: f
> import m2: f
> 
> ... then I get:
> 
> Warning: ignoring conflicting import of m2.f into Main
>  ?: 7
> String: Foo
> 
> Now Julia spots the problem, but resolves it the opposite way (i.e. the first 
> definition wins).



Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Robert Gates
Hi Ivar,

yeah, I know, I thought Andreas was replying to my deleted post. I think 
Petr already solved the problem with the globals (his gist was apparently 
not the right context). However, he still reported:

On Thursday, December 11, 2014 6:47:33 PM UTC+1, Petr Krysl wrote:
>
> One more note: I conjectured that perhaps the compiler was not able to 
> infer correctly the type of the matrices,  so I hardwired (in the actual FE 
> code)
>
> Jac = 1.0; gradN = gradNparams[j]/(J); # get rid of Rm for the moment
>
> About 10% less memory used, runtime about the same.  So, no effect really. 
> Loops are still slower than the vectorized code by a factor of two.
>
> Petr
>
>
Best,

Robert

On Friday, December 12, 2014 12:01:44 AM UTC+1, Ivar Nesje wrote:
>
> https://gist.github.com/anonymous/4ec426096c02faa4354d#comment-1354636



Re: [julia-users] BinDeps fails to find a built dependency (but only on Travis)

2014-12-11 Thread Michael Eastwood
The output of BinDeps.debug("CasaCore") on Travis is

INFO: Reading build script...

The package declares 4 dependencies.

- Library "libblas"

   - Satisfied by:

 - System Paths at /usr/lib/libopenblas.so.0

   - Providers:

 - BinDeps.AptGet package libopenblas-dev

- Library "libcasa_tables"

   - Satisfied by:

 - Simple Build Process at 
/home/travis/.julia/v0.4/CasaCore/deps/usr/lib/libcasa_tables.so

   - Providers:

 - Simple Build Process

- Library "libcasa_measures"

   - Satisfied by:

 - Simple Build Process at 
/home/travis/.julia/v0.4/CasaCore/deps/usr/lib/libcasa_measures.so

   - Providers:

 - Simple Build Process

- Library "libcasacorewrapper"

   - Providers:

 - Simple Build Process

On Wednesday, December 10, 2014 1:55:18 PM UTC-8, Elliot Saba wrote:
>
> It would be helpful if you could have julia execute "using BinDeps; 
> BinDeps.debug("CasaCore")" after attempting to build.
>
>
>
> It also looks like you have another error that we should look into:
> Warning: error initializing module GMP:
>
> ErrorException("The dynamically loaded GMP library (version 5.0.2 with 
> __gmp_bits_per_limb == 64)
>
> does not correspond to the compile time version (version 5.1.3 with 
> __gmp_bits_per_limb == 64).
>
> Please rebuild Julia.")
> Has Travis changed Ubuntu releases recently or something?
>


[julia-users] Re: Need help to understand method dispatch with modules

2014-12-11 Thread samoconnor
If I change the example to use "import" instead of "using"...

import m1: f
import m2: f

... then I get:

Warning: ignoring conflicting import of m2.f into Main
 ?: 7
String: Foo

Now Julia spots the problem, but resolves it the opposite way (i.e. the 
first definition wins).


Re: [julia-users] Need help to understand method dispatch with modules

2014-12-11 Thread Rob J. Goedman
Yes, I am struggling with this aspect as well and recently posed a related 
question (’Struggling with generic functions’) although slightly differently 
formulated.

John’s answer included below.

In your example, that might look like the updated version below, I think. Like 
you (I think), I was looking to hide this from end users.

Rob J. Goedman
goed...@mac.com

module m1

  export f

  f(x::ASCIIString) = println("ASCIIString: " * x)
  f{T<:String}(x::T) = println("  $(typeof(x)): " * x)
  f(x) = println("  $(typeof(x)): " * string(x))
end


module m2
  export f

  f(x::Int)= println("Int: " * string(x))
end

import m1.f
import m2.f

f(7)
f("Foo")
f("\u2200 x \u2203 y")
f(12.0)
f(2.0+3.0im)



On Dec 2, 2014, at 4:37 PM, John Myles White  wrote:

There's no clean solution to this. In general, I'd argue that we should stop 
exporting so many names and encourage people to use qualified names much more 
often than we do right now.

But for important abstractions, we can put them into StatsBase, which all stats 
packages should be derived from.

-- John
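
With the m1/m2 example from this thread, the qualified-name approach John 
suggests looks roughly like this (a sketch):

```julia
# Sketch: avoid merging the two generic functions at all; import the
# modules themselves and call each f through a qualified name.
import m1
import m2

m1.f("Foo")   # dispatches only among m1's methods
m2.f(7)       # dispatches only among m2's methods
```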

On Dec 2, 2014, at 4:34 PM, Rob J. Goedman  wrote:

> I’ll try to give an example of my problem based on how I’ve seen it occur in 
> Stan.jl and Jags.jl.
> 
> Both DataFrames.jl and Mamba.jl export describe(). Stan.jl relies on Mamba, 
> but neither Stan or Mamba need DataFrames. So DataFrames is not imported by 
> default.
> 
> Recently someone used Stan and wanted to read in a .csv file and added 
> DataFrames to the using clause in the script, i.e.
> 
> ```
> using Gadfly, Stan, Mamba, DataFrames
> ```
> 
> After running a simulation, Mamba’s describe(::Mamba.Chains) could no longer 
> be found.
> 
> I wonder if someone can point me in the right direction how best to solve 
> these kind of problems (for end users):
> 
> 1. One way around it is to always qualify describe(), e.g. Mamba.describe().
> 2. Use isdefined(Main, :DataFrames) to upfront test for such a collision.
> 3. Suggest to end users to import DataFrames and qualify e.g. 
> DataFrames.readtable().
> 4. ?
> 
> Thanks and regards,
> Rob J. Goedman
> goed...@mac.com 



> On Dec 11, 2014, at 2:52 PM, samoconnor  wrote:
> 
> Hi Rob,
> 
> Ok, I see why that "works", but it's a different example.
> 
> Assume that m1 and m2 are libraries from different vendors, they know nothing 
> about each other, but they both export methods for f().
> 
> It is surprising to me that importing two modules would cause one to 
> overwrite methods from the other with no warning or error. 
> 
> On Friday, December 12, 2014 9:45:37 AM UTC+11, Rob J Goedman wrote:
> Sam,
> 
> Maybe below slightly expanded version of your example will help.
> 
> I think the key is to import m1.f in module m2
> 
> Regards
> Rob J. Goedman
> goe...@mac.com 
> 
> 
> module m1
> 
>   export f
> 
>   f(x::ASCIIString) = println("ASCIIString: " * x)
>   f{T<:String}(x::T) = println("  $(typeof(x)): " * x)
>   f(x) = println("  $(typeof(x)): " * string(x))
> end
> 
> 
> module m2
> 
>   import m1.f
>   export f
> 
>   f(x::Int)= println("Int: " * string(x))
> end
> 
> using m1
> using m2
> 
> f(7)
> f("Foo")
> f("\u2200 x \u2203 y")
> f(12.0)
> f(2.0+3.0im)
> 
> 
> 
> 
>> On Dec 11, 2014, at 2:18 PM, samoconnor > 
>> wrote:
>> 
>> The example below has two modules that define methods of function f for 
>> different parameter types.
>> Both modules are imported.
>> It seems like that "using" the second module causes the first one to 
>> disappear.
>> Is that the intended behaviour?
>> 
>> 
>> #!/Applications/Julia-0.3.0-rc4.app/Contents/Resources/julia/bin/julia
>> 
>> module m1
>> 
>> export f
>> 
>> f(x::String) = println("String: " * x)
>> f(x) = println(" ?: " * string(x))
>> end
>> 
>> 
>> module m2
>> 
>> export f
>> 
>> f(x::Int)= println("   Int: " * string(x))
>> end
>> 
>> using m1
>> using m2
>> 
>> f(7)
>> f("Foo")
>> 
>> output:
>> 
>>Int: 7
>> ERROR: `f` has no method matching f(::ASCIIString)
>> 
>> 
> 



Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Ivar Nesje
https://gist.github.com/anonymous/4ec426096c02faa4354d#comment-1354636

[julia-users] `;` output suppressor behaves oddly with commented lines.

2014-12-11 Thread Ivar Nesje
I expect it's the same everywhere.

See https://github.com/JuliaLang/julia/issues/6225

The ; to suppress output is not implemented in the parser, but as a simple 
search for a trailing ;
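
The quirk can be sketched like this (exact behavior differs between the 
REPL and IJulia versions, as the thread reports):

```julia
@show x = 5;         # suppressed: the line ends with ';'
@show x = 5; # note
# The line above is suppressed at the reported 0.3.2 REPL, but not in the
# reported IJulia case: a frontend that only checks the very end of the
# line no longer sees a trailing ';' once a comment follows it.
```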

Re: [julia-users] Need help to understand method dispatch with modules

2014-12-11 Thread elextr
See open Issues https://github.com/JuliaLang/julia/issues/2327 
and https://github.com/JuliaLang/julia/issues/4345

On Friday, December 12, 2014 8:52:38 AM UTC+10, samoconnor wrote:
>
> Hi Rob,
>
> Ok, I see why that "works", but it's a different example.
>
> Assume that m1 and m2 are libraries from different vendors, they know 
> nothing about each other, but they both export methods for f().
>
> It is surprising to me that importing two modules would cause one to 
> overwrite methods from the other with no warning or error. 
>
> On Friday, December 12, 2014 9:45:37 AM UTC+11, Rob J Goedman wrote:
>>
>> Sam,
>>
>> Maybe below slightly expanded version of your example will help.
>>
>> I think the key is to import m1.f in module m2
>>
>> Regards
>> Rob J. Goedman
>> goe...@mac.com
>>
>>
>> module m1
>>
>>   export f
>>
>>   f(x::ASCIIString) = println("ASCIIString: " * x)
>>   f{T<:String}(x::T) = println("  $(typeof(x)): " * x)
>>   f(x) = println("  $(typeof(x)): " * string(x))
>> end
>>
>>
>> module m2
>>
>>   import m1.f
>>   export f
>>
>>   f(x::Int)= println("Int: " * string(x))
>> end
>>
>> using m1
>> using m2
>>
>> f(7)
>> f("Foo")
>> f("\u2200 x \u2203 y")
>> f(12.0)
>> f(2.0+3.0im)
>>
>>
>>
>>  
>> On Dec 11, 2014, at 2:18 PM, samoconnor  wrote:
>>
>> The example below has two modules that define methods of function f for 
>> different parameter types.
>> Both modules are imported.
>> It seems like that "using" the second module causes the first one to 
>> disappear.
>> Is that the intended behaviour?
>>
>>
>> #!/Applications/Julia-0.3.0-rc4.app/Contents/Resources/julia/bin/julia
>>
>> module m1
>>
>> export f
>>
>> f(x::String) = println("String: " * x)
>> f(x) = println(" ?: " * string(x))
>> end
>>
>>
>> module m2
>>
>> export f
>>
>> f(x::Int)= println("   Int: " * string(x))
>> end
>>
>> using m1
>> using m2
>>
>> f(7)
>> f("Foo")
>>
>> output:
>>
>>Int: 7
>> ERROR: `f` has no method matching f(::ASCIIString)
>>
>>
>>
>>

Re: [julia-users] Need help to understand method dispatch with modules

2014-12-11 Thread samoconnor
Hi Rob,

Ok, I see why that "works", but it's a different example.

Assume that m1 and m2 are libraries from different vendors, they know 
nothing about each other, but they both export methods for f().

It is surprising to me that importing two modules would cause one to 
overwrite methods from the other with no warning or error. 

On Friday, December 12, 2014 9:45:37 AM UTC+11, Rob J Goedman wrote:
>
> Sam,
>
> Maybe below slightly expanded version of your example will help.
>
> I think the key is to import m1.f in module m2
>
> Regards
> Rob J. Goedman
> goe...@mac.com 
>
>
> module m1
>
>   export f
>
>   f(x::ASCIIString) = println("ASCIIString: " * x)
>   f{T<:String}(x::T) = println("  $(typeof(x)): " * x)
>   f(x) = println("  $(typeof(x)): " * string(x))
> end
>
>
> module m2
>
>   import m1.f
>   export f
>
>   f(x::Int)= println("Int: " * string(x))
> end
>
> using m1
> using m2
>
> f(7)
> f("Foo")
> f("\u2200 x \u2203 y")
> f(12.0)
> f(2.0+3.0im)
>
>
>
>  
> On Dec 11, 2014, at 2:18 PM, samoconnor > 
> wrote:
>
> The example below has two modules that define methods of function f for 
> different parameter types.
> Both modules are imported.
> It seems like that "using" the second module causes the first one to 
> disappear.
> Is that the intended behaviour?
>
>
> #!/Applications/Julia-0.3.0-rc4.app/Contents/Resources/julia/bin/julia
>
> module m1
>
> export f
>
> f(x::String) = println("String: " * x)
> f(x) = println(" ?: " * string(x))
> end
>
>
> module m2
>
> export f
>
> f(x::Int)= println("   Int: " * string(x))
> end
>
> using m1
> using m2
>
> f(7)
> f("Foo")
>
> output:
>
>Int: 7
> ERROR: `f` has no method matching f(::ASCIIString)
>
>
>
>

[julia-users] `;` output suppressor behaves oddly with commented lines.

2014-12-11 Thread Ismael VC
Hello guys, do you know if this is a bug or if the behavior 
changed? http://bit.ly/1vWmrHx 

`;` works to suppress output in IJulia (0.3.3) *only* at the very end of the 
line, even if the line has a comment, and I can't get into JuliaBox to 
confirm if it's the same on the REPL.


It works as I expect on my PC (0.3.2+2), but I can't install IJulia here to 
test if it's the same as above in the notebook:

julia> @show x = 5
x = 5 => 5
5

julia> @show x = 5;
x = 5 => 5

julia> @show x = 5; # Yep!
x = 5 => 5

julia> versioninfo()
Julia Version 0.3.2+2
Commit 6babc84 (2014-10-22 01:21 UTC)
Platform Info:
  System: Windows (i686-w64-mingw32)
  CPU: AMD Athlon(tm) XP 2000+
  WORD_SIZE: 32
  BLAS: libopenblas (DYNAMIC_ARCH NO_AFFINITY Athlon)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3



Cheers!


Re: [julia-users] scope using let and local/global variables

2014-12-11 Thread Michael Mayo
My apologies, a let block *is* required; otherwise the variables are being 
defined at global scope. The corrected function:

function run(assignments,program)
  block=Expr(:block)
  args=Expr[]
  for pair in assignments
append!(args, [Expr(:(=), pair[1], pair[2])])
  end
  append!(args,[program])
  block.args=args
  program_final=Expr(:let, block)
  eval(program_final)
end
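
A usage sketch of this function (the bindings are hypothetical; each 
(symbol, value) pair becomes an assignment inside the generated `let` 
block, so the bindings stay local to the evaluation):

```julia
# Evaluate x + y*y with x=2, y=5 bound in a fresh let scope.
run([(:x, 2), (:y, 5)], :(x + y*y))   # gives 27, matching the pmap output above
```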

Mike

On Friday, December 12, 2014 11:43:54 AM UTC+13, Michael Mayo wrote:
>
> Yes you are right! The final version with the strings removed:
>
> function run(assignments,program)
>   program_final=Expr(:block)
>   args=Expr[]
>   for pair in assignments
> append!(args, [Expr(:(=), pair[1], pair[2])])
>   end
>   append!(args,[program])
>   program_final.args=args
>   eval(program_final)
> end
>
> Turns out that making a let expression is not required; a simple block 
> expression does the job.
> Mike
>
> On Friday, December 12, 2014 11:07:42 AM UTC+13, Mike Innes wrote:
>>
>> Great that you got this working, but I strongly recommend working with 
>> expression objects here as opposed to strings. It's likely to be more 
>> robust and will mean you can use data that isn't parseable (i.e. most 
>> things other than numbers) as inputs.
>>
>> On 11 December 2014 at 21:40, Michael Mayo  wrote:
>>
>>> Thanks for both answers! I figured out a slightly different way of doing 
>>> it by putting the let assignments into a string with a "nothing" 
>>> expression, parsing the string, and then inserting the actual expression to 
>>> be evaluated into the correct place in the let block:
>>>
>>> function run(assignments,program)
>>>   program_string="let"
>>>   for pair in assignments
>>> program_string="$(program_string) $(pair[1])=$(pair[2]);"
>>>   end
>>>   program_string="$(program_string) nothing; end"
>>>   program_final=parse(program_string)
>>>   program_final.args[1].args[end]=program
>>>   eval(program_final)
>>> end
>>>
>>> I can now evaluate the same expression with different inputs in parallel 
>>> without worrying that they might conflict because all the variables are 
>>> local, e.g.:
>>>
>>> pmap(dict->run(dict,:(x+y*y)), [{:x=>2,:y=>5},{:x=>6,:y=>10}])
>>>
>>> *2-element Array{Any,1}:*
>>>
>>> *  27*
>>>
>>> * 106*
>>>
>>> Thanks for your help!
>>> Mike
>>>
>>>
>>>
>>> On Thursday, December 11, 2014 10:34:22 PM UTC+13, Mike Innes wrote:

 You can do this just fine, but you have to be explicit about what 
 variables you want to pass in, e.g.

 let x=2
   exp=:(x+1)
   eval(:(let x = $x; $exp; end))
 end

 If you want to call the expression with multiple inputs, wrap it in a 
 function:

 let x=2
   exp=:(x+1)
   f = eval(:(x -> $exp))
   f(x)
 end


 On 11 December 2014 at 06:32, Jameson Nash  wrote:

> I'm not quite sure what a genetic program of that sort would look 
> like. I would be interested to hear if you get something out of it.
>
> Another alternative is to use a module as the environment:
>
> module MyEnv
> end
> eval(MyEnv, :(code block))
>
> This is (roughly) how the REPL is implemented to work.
>
> On Thu Dec 11 2014 at 1:26:57 AM Michael Mayo  
> wrote:
>
>> Thanks, but its not quite what I'm looking for. I want to be able to 
>> edit the Expr tree and then evaluate different expressions using 
>> variables 
>> defined in the local scope, not the global scope (e.g. for genetic 
>> programming, where random changes to an expression are repeatedly 
>> evaluated 
>> to find the best one). Using anonymous functions could work but 
>> modifying 
>> the .code property of an anonymous function looks much more complex than 
>> modifying the Expr types.
>>
>> Anyway thanks for your answer, maybe your suggestion is the only 
>> possible way to achieve this!
>>
>> Mike 
>>
>>
>> On Thursday, December 11, 2014 6:56:15 PM UTC+13, Jameson wrote:
>>
>>> eval, by design, doesn't work that way. There are just too many 
>>> better alternatives. Typically, an anonymous function / lambda is the 
>>> best 
>>> and most direct replacement:
>>>
>>> let x=2
>>>   println(x)# Line 1
>>>   exp = () -> x+1
>>>   println(exp())# Line 2
>>> end
>>>
>>>
>>> On Wed Dec 10 2014 at 10:43:00 PM Michael Mayo  
>>> wrote:
>>>
 Hi folks,

 I have the following code fragment:

 x=1
 let x=2
   println(x)# Line 1
   exp=:(x+1)
   println(eval(exp))# Line 2
 end

 It contains two variables both named x, one inside the scope 
 defined by let, and one at global scope.

 If I run this code the output is:
 2
 2

 This indicates (i) that line 1 is using the local version of 
 x, and (ii) that line 2 is using the global version of x.

Re: [julia-users] Need help to understand method dispatch with modules

2014-12-11 Thread Rob J. Goedman
Sam,

Maybe below slightly expanded version of your example will help.

I think key is to import m1.f in module m2

Regards
Rob J. Goedman
goed...@mac.com


module m1

  export f

  f(x::ASCIIString) = println("ASCIIString: " * x)
  f{T<:String}(x::T) = println("  $(typeof(x)): " * x)
  f(x) = println("  $(typeof(x)): " * string(x))
end


module m2

  import m1.f
  export f

  f(x::Int)= println("Int: " * string(x))
end

using m1
using m2

f(7)
f("Foo")
f("\u2200 x \u2203 y")
f(12.0)
f(2.0+3.0im)
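
The key line is `import m1.f`: `using` only makes a binding visible, while `import` lets the second module add methods to the same generic function. A minimal sketch of the distinction (module names `A`/`B` and the `g` methods are hypothetical, not from the thread):

```julia
module A
  export g
  g(x::Real) = "Real method"
end

module B
  import A.g          # extend A.g rather than defining a separate B.g
  export g
  g(x::String) = "String method"
end

using A
using B

g(1)       # dispatches to the Real method
g("hi")    # dispatches to the String method
```

Without the `import`, `A.g` and `B.g` are unrelated functions, and the second `using` masks the first — which is exactly the `no method matching f(::ASCIIString)` error in the quoted question.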




> On Dec 11, 2014, at 2:18 PM, samoconnor  wrote:
> 
> The example below has two modules that define methods of function f for 
> different parameter types.
> Both modules are imported.
> It seems that "using" the second module causes the first one to 
> disappear.
> Is that the intended behaviour?
> 
> 
> #!/Applications/Julia-0.3.0-rc4.app/Contents/Resources/julia/bin/julia
> 
> module m1
> 
> export f
> 
> f(x::String) = println("String: " * x)
> f(x) = println(" ?: " * string(x))
> end
> 
> 
> module m2
> 
> export f
> 
> f(x::Int)= println("   Int: " * string(x))
> end
> 
> using m1
> using m2
> 
> f(7)
> f("Foo")
> 
> output:
> 
>Int: 7
> ERROR: `f` has no method matching f(::ASCIIString)
> 
> 



Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Robert Gates
Yeah, I think I figured it out on my own, hence the message deletion. 
Nonetheless, I don't see your comment.

On Thursday, December 11, 2014 11:29:15 PM UTC+1, Andreas Noack wrote:
>
> I wrote a comment in the gist.
>
> 2014-12-11 17:08 GMT-05:00 Robert Gates 
> >:
>
>> In any case, this does make me wonder what is going on under the hood... 
>> I would not call the vectorized code "vectorized". IMHO, this should just 
>> pass to BLAS without overhead. Something appears to be creating a bunch of 
>> temporaries.
>>
>> On Thursday, December 11, 2014 5:47:01 PM UTC+1, Petr Krysl wrote:
>>
>>> Acting upon the advice that replacing matrix-matrix multiplications in 
>>> vectorized form with loops would help with performance, I chopped out a 
>>> piece of code from my finite element solver (https://gist.github.com/
>>> anonymous/4ec426096c02faa4354d) and ran some tests with the following 
>>> results:
>>>
>>> Vectorized code:
>>> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
>>> time)
>>>
>>> Loops code:
>>> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
>>> time) 
>>>
>>> SLOWER and using MORE memory?!
>>>
>>> I must be doing something terribly wrong.
>>>
>>> Petr
>>>
>>>
>

Re: [julia-users] scope using let and local/global variables

2014-12-11 Thread Michael Mayo
Yes you are right! The final version with the strings removed:

function run(assignments,program)
  program_final=Expr(:block)
  args=Expr[]
  for pair in assignments
append!(args, [Expr(:(=), pair[1], pair[2])])
  end
  append!(args,[program])
  program_final.args=args
  eval(program_final)
end

Turns out that making a let expression is not required; a simple block 
expression does the job.
Mike
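
A usage sketch of `run` above, with the inputs from the earlier `pmap` example. One caveat worth noting: evaluating the block at top level makes the assignments globals of `Main` on whichever process runs it, so the isolation in the `pmap` case comes from each worker being a separate process, not from the variables being local.

```julia
# Definition repeated from above so the snippet is self-contained:
function run(assignments, program)
  program_final = Expr(:block)
  args = Expr[]
  for pair in assignments
    append!(args, [Expr(:(=), pair[1], pair[2])])
  end
  append!(args, [program])
  program_final.args = args
  eval(program_final)
end

run({:x=>2, :y=>5}, :(x + y*y))
# => 27
```

On multiple workers, `pmap(d -> run(d, :(x + y*y)), [{:x=>2,:y=>5}, {:x=>6,:y=>10}])` then gives the `[27, 106]` result shown above.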

On Friday, December 12, 2014 11:07:42 AM UTC+13, Mike Innes wrote:
>
> Great that you got this working, but I strongly recommend working with 
> expression objects here as opposed to strings. It's likely to be more 
> robust and will mean you can use data that isn't parseable (i.e. most 
> things other than numbers) as inputs.
>
> On 11 December 2014 at 21:40, Michael Mayo  > wrote:
>
>> Thanks for both answers! I figured out a slightly different way of doing 
>> it by putting the let assignments into a string with a "nothing" 
>> expression, parsing the string, and then inserting the actual expression to 
>> be evaluated into the correct place in the let block:
>>
>> function run(assignments,program)
>>   program_string="let"
>>   for pair in assignments
>> program_string="$(program_string) $(pair[1])=$(pair[2]);"
>>   end
>>   program_string="$(program_string) nothing; end"
>>   program_final=parse(program_string)
>>   program_final.args[1].args[end]=program
>>   eval(program_final)
>> end
>>
>> I can now evaluate the same expression with different inputs in parallel 
>> without worrying that they might conflict because all the variables are 
>> local, e.g.:
>>
>> pmap(dict->run(dict,:(x+y*y)), [{:x=>2,:y=>5},{:x=>6,:y=>10}])
>>
>> 2-element Array{Any,1}:
>>   27
>>  106
>>
>> Thanks for your help!
>> Mike
>>
>>
>>
>> On Thursday, December 11, 2014 10:34:22 PM UTC+13, Mike Innes wrote:
>>>
>>> You can do this just fine, but you have to be explicit about what 
>>> variables you want to pass in, e.g.
>>>
>>> let x=2
>>>   exp=:(x+1)
>>>   eval(:(let x = $x; $exp; end))
>>> end
>>>
>>> If you want to call the expression with multiple inputs, wrap it in a 
>>> function:
>>>
>>> let x=2
>>>   exp=:(x+1)
>>>   f = eval(:(x -> $exp))
>>>   f(x)
>>> end
>>>
>>>
>>> On 11 December 2014 at 06:32, Jameson Nash  wrote:
>>>
 I'm not quite sure what a genetic program of that sort would look like. 
 I would be interested to hear if you get something out of it.

 Another alternative is to use a module as the environment:

 module MyEnv
 end
 eval(MyEnv, :(code block))

 This is (roughly) how the REPL is implemented to work.

 On Thu Dec 11 2014 at 1:26:57 AM Michael Mayo  
 wrote:

> Thanks, but it's not quite what I'm looking for. I want to be able to 
> edit the Expr tree and then evaluate different expressions using 
> variables 
> defined in the local scope, not the global scope (e.g. for genetic 
> programming, where random changes to an expression are repeatedly 
> evaluated 
> to find the best one). Using anonymous functions could work but modifying 
> the .code property of an anonymous function looks much more complex than 
> modifying the Expr types.
>
> Anyway thanks for your answer, maybe your suggestion is the only 
> possible way to achieve this!
>
> Mike 
>
>
> On Thursday, December 11, 2014 6:56:15 PM UTC+13, Jameson wrote:
>
>> eval, by design, doesn't work that way. there are just too many 
>> better alternatives. typically, an anonymous function / lambda is the 
>> best 
>> and most direct replacement:
>>
>> let x=2
>>   println(x)# Line 1
>>   exp = () -> x+1
>>   println(exp())# Line 2
>> end
>>
>>
>> On Wed Dec 10 2014 at 10:43:00 PM Michael Mayo  
>> wrote:
>>
>>> Hi folks,
>>>
>>> I have the following code fragment:
>>>
>>> x=1
>>> let x=2
>>>   println(x)# Line 1
>>>   exp=:(x+1)
>>>   println(eval(exp))# Line 2
>>> end
>>>
>>> It contains two variables both named x, one inside the scope defined 
>>> by let, and one at global scope.
>>>
>>> If I run this code the output is:
>>> 2
>>> 2
>>>
>>> This indicates (i) that line 1 is using the local version of x, 
>>> and (ii) that line 2 is using the global version of x.
>>>
>>> If I remove this global x I now get an error because eval() is 
>>> looking for the global x which no longer exists:
>>>
>>> let x=2
>>>   println(x)# Line 1
>>>   exp=:(x+1)
>>>   println(eval(exp))# Line 2
>>> end
>>>
>>> 2
>>>
>>> ERROR: x not defined
>>>
>>>
>>> My question: when evaluating an expression using eval() such as line 
>>> 2, how can I force Julia to use the local (not global) version of x and 
>>> thus avoid this error?
>>>
>>>
>>> Thanks
>>>
>>

Re: [julia-users] Help with types (Arrays)

2014-12-11 Thread elextr
And it is emphasised in the 
manual 
http://docs.julialang.org/en/release-0.3/manual/types/#man-parametric-types.

On Friday, December 12, 2014 6:34:10 AM UTC+10, Isaiah wrote:
>
> I suggest to start with this recent thread, and there are some others if 
> you search for "covariance" on the list archives:
>
> https://groups.google.com/d/msg/julia-users/dEnCJ-nxAGE/1Bw4a1UvFCEJ
>
> (if it is still unclear how to do what you need after that, feel free to 
> ask)
>
>
>
>
>
>
> On Thu, Dec 11, 2014 at 3:26 PM, S > wrote:
>
>>
>>
>> Hi all - very new to julia and am trying to wrap my head around multiple 
>> dispatch.
>>
>> I would think that because Int64 <: Integer, that an array of Int64 is an 
>> array of Integer. However, that doesn't appear to be the case.
>>
>> That is, with c = [1,2,3,4], I can't create a constructor function 
>> foo(myarr::Array{Integer,1}) - and Vector{Integer} doesn't work either.
>>
>> How do I create a constructor that takes a vector of Integers (of any 
>> subtype) - and only integers - and operates on the elements?
>>
>>
>> julia> c = [1,2,3,4]
>>
>> julia> c
>> 4-element Array{Int64,1}:
>>  1
>>  2
>>  3
>>  4
>>
>> julia> isa(c,Vector)
>> true
>>
>> julia> isa(c,Vector{Integer})
>> false
>>
>> julia> isa(c,Vector{Int64})
>> true
>>
>> julia> isa(c,Array{Integer})
>> false
>>
>
>
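
As the linked manual section explains, type parameters are invariant: `Vector{Int64}` is not a subtype of `Vector{Integer}`. The idiomatic signature is a parametric method; a minimal sketch (the name `foo` and its body are illustrative, not from the thread):

```julia
# Accepts a vector whose element type is any single subtype of Integer
foo{T<:Integer}(v::Vector{T}) = sum(v)

foo([1, 2, 3, 4])     # => 10   (Vector{Int64} matches with T = Int64)
foo(Int8[1, 2, 3])    # => 6    (Vector{Int8} matches with T = Int8)
# foo([1.0, 2.0])     # MethodError: Float64 is not a subtype of Integer
```

An explicitly constructed `Vector{Integer}` also matches, since `T = Integer` satisfies `T <: Integer`.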

Re: [julia-users] Re: EACCESS errors when starting julia 0.3.3

2014-12-11 Thread Robbin Bonthond
https://github.com/JuliaLang/julia/issues/9319 has been filed

On Thursday, December 11, 2014 3:07:06 PM UTC-6, Stefan Karpinski wrote:
>
> On Thu, Dec 11, 2014 at 4:05 PM, Robbin Bonthond  > wrote:
>
>> for some reason the binary installer of julia uses restricted group 
>> permissions
>
>
> That sounds like a potential problem – would you mind filing an issue?
>
> https://github.com/JuliaLang/julia/issues
>


Re: [julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Andreas Noack
I wrote a comment in the gist.

2014-12-11 17:08 GMT-05:00 Robert Gates :

> In any case, this does make me wonder what is going on under the hood... I
> would not call the vectorized code "vectorized". IMHO, this should just
> pass to BLAS without overhead. Something appears to be creating a bunch of
> temporaries.
>
> On Thursday, December 11, 2014 5:47:01 PM UTC+1, Petr Krysl wrote:
>
>> Acting upon the advice that replacing matrix-matrix multiplications in
>> vectorized form with loops would help with performance, I chopped out a
>> piece of code from my finite element solver (https://gist.github.com/
>> anonymous/4ec426096c02faa4354d) and ran some tests with the following
>> results:
>>
>> Vectorized code:
>> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc
>> time)
>>
>> Loops code:
>> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc
>> time)
>>
>> SLOWER and using MORE memory?!
>>
>> I must be doing something terribly wrong.
>>
>> Petr
>>
>>


Re: [julia-users] Problem with Domain Error in an aggregator function

2014-12-11 Thread Andreas Noack
I think the easiest solution would be to store the result in a matrix and
save that with writedlm.
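
A minimal sketch of that suggestion (the column order and filename are assumptions): accumulate the (c1, c2, cm) triplets as rows of an m*n × 3 matrix, then write it in one call.

```julia
m, n = 10, 5
c1 = abs(randn(m))
c2 = abs(randn(n))

rows = Array(Float64, m*n, 3)    # one row per (i, j) combination
k = 1
for j = 1:n, i = 1:m             # i varies fastest, as in the example table
    cm = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
    rows[k, 1] = c1[i]
    rows[k, 2] = c2[j]
    rows[k, 3] = cm
    k += 1
end

writedlm("Dixit_Stiglitz.csv", rows, ',')
```

`readdlm` reads the matrix back, and the three columns map directly onto the x, y, z inputs of a 3D plot.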

2014-12-11 17:07 GMT-05:00 Pileas :

> Andreas thanks, you were right about the consumption. I changed the
> example and it worked. My only complaint is that I am not able to save my
> results the way I want.
>
> See for example the modified code:
>
> --
> m = 10;
> n = 5;
>
> c1 = abs(randn(m));
> c2 = abs(randn(n));
> cm = Array(Float64, m*n);
>
> csvfile = open("Dixit_Stiglitz.csv","w")
> write(csvfile,"c1,c2,cm, \n")
>
> for i = 1:m
> for j = 1:n
> cm = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
> println(csvfile, join(c1, c2, cm), ",")
> end
> end
> -
>
> What I want to do is to have a csv file of the following format (here for
> simplicity c1 = (1, 2) and c2 = (1,  2, 3))
>
> c1  c2cm
> ---
> 1   1   cm(1,1)
> 2   1   cm(2,1)
> 1   2   cm(1,2)
> 2   2   cm(2,2)
> 1   3   cm(1,3)
> 2   3   cm(2,3)
>
> and eventually, with the above numbers I want to plot the 3D graph. But
> the code that I gave does not show the results well.
>
> Any help will be appreciated.
>
> Thanks
>
> Τη Πέμπτη, 11 Δεκεμβρίου 2014 4:33:10 μ.μ. UTC-5, ο χρήστης Andreas Noack
> έγραψε:
>>
>> The problem is the non-integer power of a negative number, so you'll have
>> to restrict the consumption to be positive.
>>
>> 2014-12-11 16:29 GMT-05:00 Pileas :
>>
>>> Hello all,
>>>
>>> I have this function from which I want to make a 3D figure. I have some
>>> problems though, because I get a domain error. I am sure it must be a
>>> stupid mistake or something that I do not understand ... but I cannot
>>> figure out what I am doing wrong.
>>>
>>> So, the function is: C_M = ((c_1^(0.3/1.3) + c_2^(0.3/1.3))^(1.3/0.3)
>>>
>>> To my understanding, in order to make a 3D graph I need to keep one
>>> dimension constant while the other changes and gives values to C_M until
>>> all combinations in both directions have been exhausted (at least in the
>>> domain that I set). Eventually you get triplets of the form (c1, c2, cm).
>>>
>>> Here is the code:
>>>
>>> 
>>> --
>>> m = 100;# 100 points in each direction
>>> n = 100; # 100 points in each direction
>>>
>>> c1 = randn(m);
>>> c2 = randn(n);
>>> cm = zeros(m*n);  # Cartesian product
>>>
>>> csvfile = open("graph.csv","w")
>>> write(csvfile,"c1,c2,cm, \n")
>>>
>>> for i = 1:m
>>> for j = 1:n
>>>cm[] = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
>>> end
>>> write(csvfile, join(graph,","), "\n")
>>> end
>>>
>>> 
>>> --
>>>
>>> P.S. I want to save the results and then use PyPlot to make the graph.
>>>
>>> Thank you for your time.
>>>
>>
>>


Re: [julia-users] LLVM3.2 and JULIA BUILD PROBLEM

2014-12-11 Thread Stefan Karpinski
LLVM 3.2 is no longer supported – the default Julia version of LLVM is 3.3.


> On Dec 11, 2014, at 4:58 PM, John Myles White  
> wrote:
> 
> My understanding is that different versions of LLVM are enormously different 
> and that there's no safe way to make Julia work with any version of LLVM 
> other than the intended one.
> 
>  -- John
> 
>> On Dec 11, 2014, at 4:56 PM, Vehbi Eşref Bayraktar 
>>  wrote:
>> 
>> Hi;
>> 
>> I am using llvm 3.2 with libnvvm . However when i try to build julia using 
>> those 2 flags :
>> USE_SYSTEM_LLVM = 1
>> USE_LLVM_SHLIB = 1
>> 
>> I have a bunch of errors, starting as follows:
>> 
>> codegen.cpp: In function ‘void jl_init_codegen()’:
>> codegen.cpp:4886:26: error: ‘getProcessTriple’ is not a member of ‘llvm::sys’
>>  Triple TheTriple(sys::getProcessTriple()); // llvm32 doesn't have 
>> this one instead it has getDefaultTargetTriple()
>>   ^
>> codegen.cpp:4919:5: error: ‘mbuilder’ was not declared in this scope
>>  mbuilder = new MDBuilder(getGlobalContext());  //  include 
>>  would fix this
>>  ^
>> codegen.cpp:4919:20: error: expected type-specifier before ‘MDBuilder’
>>  mbuilder = new MDBuilder(getGlobalContext());
>> 
>> Even if you fix these errors, you keep hitting the following ones:
>> In file included from codegen.cpp:976:0:
>> intrinsics.cpp: In function ‘llvm::Value* emit_intrinsic(JL_I::intrinsic, 
>> jl_value_t**, size_t, jl_codectx_t*)’:
>> intrinsics.cpp:1158:72: error: ‘ceil’ is not a member of ‘llvm::Intrinsic’
>>  return builder.CreateCall(Intrinsic::getDeclaration(jl_Module, 
>> Intrinsic::ceil,
>> 
>> 
>> 
>> So is the master branch currently supporting llvm32? Or is there a patch 
>> somewhere?
>> 
>> Thanks
> 


[julia-users] Need help to understand method dispatch with modules

2014-12-11 Thread samoconnor
The example below has two modules that define methods of function f for 
different parameter types.
Both modules are imported.
It seems that "using" the second module causes the first one to 
disappear.
Is that the intended behaviour?


#!/Applications/Julia-0.3.0-rc4.app/Contents/Resources/julia/bin/julia

module m1

export f

f(x::String) = println("String: " * x)
f(x) = println(" ?: " * string(x))
end


module m2

export f

f(x::Int)= println("   Int: " * string(x))
end

using m1
using m2

f(7)
f("Foo")

output:

   Int: 7
ERROR: `f` has no method matching f(::ASCIIString)




[julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Robert Gates
In any case, this does make me wonder what is going on under the hood... I 
would not call the vectorized code "vectorized". IMHO, this should just 
pass to BLAS without overhead. Something appears to be creating a bunch of 
temporaries.

On Thursday, December 11, 2014 5:47:01 PM UTC+1, Petr Krysl wrote:
>
> Acting upon the advice that replacing matrix-matrix multiplications in 
> vectorized form with loops would help with performance, I chopped out a 
> piece of code from my finite element solver (
> https://gist.github.com/anonymous/4ec426096c02faa4354d) and ran some 
> tests with the following results:
>
> Vectorized code:
> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
> time)
>
> Loops code:
> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
> time) 
>
> SLOWER and using MORE memory?!
>
> I must be doing something terribly wrong.
>
> Petr
>
>

Re: [julia-users] scope using let and local/global variables

2014-12-11 Thread Mike Innes
Great that you got this working, but I strongly recommend working with
expression objects here as opposed to strings. It's likely to be more
robust and will mean you can use data that isn't parseable (i.e. most
things other than numbers) as inputs.

On 11 December 2014 at 21:40, Michael Mayo  wrote:

> Thanks for both answers! I figured out a slightly different way of doing
> it by putting the let assignments into a string with a "nothing"
> expression, parsing the string, and then inserting the actual expression to
> be evaluated into the correct place in the let block:
>
> function run(assignments,program)
>   program_string="let"
>   for pair in assignments
> program_string="$(program_string) $(pair[1])=$(pair[2]);"
>   end
>   program_string="$(program_string) nothing; end"
>   program_final=parse(program_string)
>   program_final.args[1].args[end]=program
>   eval(program_final)
> end
>
> I can now evaluate the same expression with different inputs in parallel
> without worrying that they might conflict because all the variables are
> local, e.g.:
>
> pmap(dict->run(dict,:(x+y*y)), [{:x=>2,:y=>5},{:x=>6,:y=>10}])
>
> 2-element Array{Any,1}:
>   27
>  106
>
> Thanks for your help!
> Mike
>
>
>
> On Thursday, December 11, 2014 10:34:22 PM UTC+13, Mike Innes wrote:
>>
>> You can do this just fine, but you have to be explicit about what
>> variables you want to pass in, e.g.
>>
>> let x=2
>>   exp=:(x+1)
>>   eval(:(let x = $x; $exp; end))
>> end
>>
>> If you want to call the expression with multiple inputs, wrap it in a
>> function:
>>
>> let x=2
>>   exp=:(x+1)
>>   f = eval(:(x -> $exp))
>>   f(x)
>> end
>>
>>
>> On 11 December 2014 at 06:32, Jameson Nash  wrote:
>>
>>> I'm not quite sure what a genetic program of that sort would look like.
>>> I would be interested to hear if you get something out of it.
>>>
>>> Another alternative is to use a module as the environment:
>>>
>>> module MyEnv
>>> end
>>> eval(MyEnv, :(code block))
>>>
>>> This is (roughly) how the REPL is implemented to work.
>>>
>>> On Thu Dec 11 2014 at 1:26:57 AM Michael Mayo 
>>> wrote:
>>>
 Thanks, but it's not quite what I'm looking for. I want to be able to
 edit the Expr tree and then evaluate different expressions using variables
 defined in the local scope, not the global scope (e.g. for genetic
 programming, where random changes to an expression are repeatedly evaluated
 to find the best one). Using anonymous functions could work but modifying
 the .code property of an anonymous function looks much more complex than
 modifying the Expr types.

 Anyway thanks for your answer, maybe your suggestion is the only
 possible way to achieve this!

 Mike


 On Thursday, December 11, 2014 6:56:15 PM UTC+13, Jameson wrote:

> eval, by design, doesn't work that way. there are just too many better
> alternatives. typically, an anonymous function / lambda is the best and
> most direct replacement:
>
> let x=2
>   println(x)# Line 1
>   exp = () -> x+1
>   println(exp())# Line 2
> end
>
>
> On Wed Dec 10 2014 at 10:43:00 PM Michael Mayo 
> wrote:
>
>> Hi folks,
>>
>> I have the following code fragment:
>>
>> x=1
>> let x=2
>>   println(x)# Line 1
>>   exp=:(x+1)
>>   println(eval(exp))# Line 2
>> end
>>
>> It contains two variables both named x, one inside the scope defined
>> by let, and one at global scope.
>>
>> If I run this code the output is:
>> 2
>> 2
>>
>> This indicates (i) that line 1 is using the local version of x,
>> and (ii) that line 2 is using the global version of x.
>>
>> If I remove this global x I now get an error because eval() is
>> looking for the global x which no longer exists:
>>
>> let x=2
>>   println(x)# Line 1
>>   exp=:(x+1)
>>   println(eval(exp))# Line 2
>> end
>>
>> 2
>>
>> ERROR: x not defined
>>
>>
>> My question: when evaluating an expression using eval() such as line
>> 2, how can I force Julia to use the local (not global) version of x and
>> thus avoid this error?
>>
>>
>> Thanks
>>
>> Mike
>>
>
>>


Re: [julia-users] Problem with Domain Error in an aggregator function

2014-12-11 Thread Pileas
Andreas thanks, you were right about the consumption. I changed the example 
and it worked. My only complaint is that I am not able to save my results 
the way I want.

See for example the modified code:

--
m = 10;
n = 5;

c1 = abs(randn(m));
c2 = abs(randn(n));
cm = Array(Float64, m*n);

csvfile = open("Dixit_Stiglitz.csv","w")
write(csvfile,"c1,c2,cm, \n")

for i = 1:m
for j = 1:n
cm = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
println(csvfile, join(c1, c2, cm), ",")
end
end
-

What I want to do is to have a csv file of the following format (here for 
simplicity c1 = (1, 2) and c2 = (1,  2, 3))

c1  c2cm
---
1   1   cm(1,1)
2   1   cm(2,1)
1   2   cm(1,2)
2   2   cm(2,2)
1   3   cm(1,3)
2   3   cm(2,3)

and eventually, with the above numbers I want to plot the 3D graph. But the 
code that I gave does not show the results well.

Any help will be appreciated.

Thanks

Τη Πέμπτη, 11 Δεκεμβρίου 2014 4:33:10 μ.μ. UTC-5, ο χρήστης Andreas Noack 
έγραψε:
>
> The problem is the non-integer power of a negative number, so you'll have 
> to restrict the consumption to be positive.
>
> 2014-12-11 16:29 GMT-05:00 Pileas >:
>
>> Hello all,
>>
>> I have this function from which I want to make a 3D figure. I have some 
>> problems though, because I get a domain error. I am sure it must be a 
>> stupid mistake or something that I do not understand ... but I cannot 
>> figure out what I am doing wrong.
>>
>> So, the function is: C_M = ((c_1^(0.3/1.3) + c_2^(0.3/1.3))^(1.3/0.3)
>>
>> To my understanding, in order to make a 3D graph I need to keep one 
>> dimension constant while the other changes and gives values to C_M until 
>> all combinations in both directions have been exhausted (at least in the 
>> domain that I set). Eventually you get triplets of the form (c1, c2, cm).
>>
>> Here is the code:
>>
>> --
>> m = 100;# 100 points in each direction
>> n = 100; # 100 points in each direction
>>
>> c1 = randn(m);
>> c2 = randn(n);
>> cm = zeros(m*n);  # Cartesian product
>>
>> csvfile = open("graph.csv","w")
>> write(csvfile,"c1,c2,cm, \n")
>>
>> for i = 1:m
>> for j = 1:n
>>cm[] = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
>> end
>> write(csvfile, join(graph,","), "\n")
>> end
>>
>> --
>>
>> P.S. I want to save the results and then use PyPlot to make the graph.
>>
>> Thank you for your time.
>>
>
>

[julia-users] Re: Aren't loops supposed to be faster?

2014-12-11 Thread Robert Gates
Hi Petr,

I just tried the devectorized problem, although I did choose to go a bit of 
a different route: https://gist.github.com/rleegates/2d99e6251fe246b017ac   
I am not sure that this is what you intended, however, using the vectorized 
code as a reference, I do obtain the same results up to machine epsilon.

Anyways, I got:

In  [4]: keTest(200_000)
Vectorized:
elapsed time: 0.426404203 seconds (140804768 bytes allocated, 22.42% gc 
time)
DeVectorized:
elapsed time: 0.078519349 seconds (128 bytes allocated)
DeVectorized InBounds:
elapsed time: 0.032812311 seconds (128 bytes allocated)
Error norm deVec: 0.0
Error norm inBnd: 0.0

On Thursday, December 11, 2014 5:47:01 PM UTC+1, Petr Krysl wrote:
>
> Acting upon the advice that replacing matrix-matrix multiplications in 
> vectorized form with loops would help with performance, I chopped out a 
> piece of code from my finite element solver (
> https://gist.github.com/anonymous/4ec426096c02faa4354d) and ran some 
> tests with the following results:
>
> Vectorized code:
> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
> time)
>
> Loops code:
> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
> time) 
>
> SLOWER and using MORE memory?!
>
> I must be doing something terribly wrong.
>
> Petr
>
>
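
One frequent cause of this kind of slowdown: on 0.3, slices such as `B[:, i]` and small matrix products inside a loop each allocate a temporary, so "devectorizing" only the outer structure can allocate more than the vectorized code. Fully scalarized loops avoid it; an illustrative kernel (hypothetical, not the gist's actual code) accumulating `K += w * B'B`:

```julia
function accum!(K::Matrix{Float64}, B::Matrix{Float64}, w::Float64)
    m, n = size(B)                   # K is assumed n-by-n
    @inbounds for j = 1:n, i = 1:n
        s = 0.0
        for k = 1:m
            s += B[k, i] * B[k, j]   # (B'B)[i, j] as a scalar accumulation
        end
        K[i, j] += w * s
    end
    return K
end
```

A kernel in this style allocates nothing per call, in the same spirit as the near-zero allocation figures Robert reports for his `@inbounds` variant.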

Re: [julia-users] LLVM3.2 and JULIA BUILD PROBLEM

2014-12-11 Thread John Myles White
My understanding is that different versions of LLVM are enormously different 
and that there's no safe way to make Julia work with any version of LLVM other 
than the intended one.

 -- John

On Dec 11, 2014, at 4:56 PM, Vehbi Eşref Bayraktar 
 wrote:

> Hi;
> 
> I am using llvm 3.2 with libnvvm . However when i try to build julia using 
> those 2 flags :
> USE_SYSTEM_LLVM = 1
> USE_LLVM_SHLIB = 1
> 
> I have a bunch of errors, starting as follows:
> 
> codegen.cpp: In function ‘void jl_init_codegen()’:
> codegen.cpp:4886:26: error: ‘getProcessTriple’ is not a member of ‘llvm::sys’
>  Triple TheTriple(sys::getProcessTriple()); // llvm32 doesn't have 
> this one instead it has getDefaultTargetTriple()
>   ^
> codegen.cpp:4919:5: error: ‘mbuilder’ was not declared in this scope
>  mbuilder = new MDBuilder(getGlobalContext());  //  include 
>  would fix this
>  ^
> codegen.cpp:4919:20: error: expected type-specifier before ‘MDBuilder’
>  mbuilder = new MDBuilder(getGlobalContext());
> 
> Even if you fix these errors, you keep hitting the following ones:
> In file included from codegen.cpp:976:0:
> intrinsics.cpp: In function ‘llvm::Value* emit_intrinsic(JL_I::intrinsic, 
> jl_value_t**, size_t, jl_codectx_t*)’:
> intrinsics.cpp:1158:72: error: ‘ceil’ is not a member of ‘llvm::Intrinsic’
>  return builder.CreateCall(Intrinsic::getDeclaration(jl_Module, 
> Intrinsic::ceil,
> 
> 
> 
> So is the master branch currently supporting llvm32? Or is there a patch 
> somewhere?
> 
> Thanks



[julia-users] LLVM3.2 and JULIA BUILD PROBLEM

2014-12-11 Thread Vehbi Eşref Bayraktar
Hi;

I am using llvm 3.2 with libnvvm . However when i try to build julia using 
those 2 flags :
USE_SYSTEM_LLVM = 1
USE_LLVM_SHLIB = 1

I have a bunch of errors, starting as follows:

codegen.cpp: In function ‘void jl_init_codegen()’:
codegen.cpp:4886:26: error: ‘getProcessTriple’ is not a member of 
‘llvm::sys’
 Triple TheTriple(sys::getProcessTriple()); // llvm32 doesn't have 
this one instead it has getDefaultTargetTriple()
  ^
codegen.cpp:4919:5: error: ‘mbuilder’ was not declared in this scope
 mbuilder = new MDBuilder(getGlobalContext());  //  include 
 would fix this
 ^
codegen.cpp:4919:20: error: expected type-specifier before ‘MDBuilder’
 mbuilder = new MDBuilder(getGlobalContext());

Even if you fix these errors, you keep hitting the following ones:
In file included from codegen.cpp:976:0:
intrinsics.cpp: In function ‘llvm::Value* emit_intrinsic(JL_I::intrinsic, 
jl_value_t**, size_t, jl_codectx_t*)’:
intrinsics.cpp:1158:72: error: ‘ceil’ is not a member of ‘llvm::Intrinsic’
 return builder.CreateCall(Intrinsic::getDeclaration(jl_Module, 
Intrinsic::ceil,



So is the master branch currently supporting llvm32? Or is there a patch 
somewhere?

Thanks


Re: [julia-users] Initializing a SharedArray Memory Error

2014-12-11 Thread benFranklin
I have noticed that these remote references can't be fetched:

fetch(zeroMatrix.refs[1]) 

the driver process just waits forever, so I'm thinking that the 
remotecall_wait() in 
https://github.com/JuliaLang/julia/blob/f3c355115ab02868ac644a5561b788fc16738443/base/sharedarray.jl#L96
exits before it should. Any ideas?

On Wednesday, 10 December 2014 13:47:19 UTC-5, benFranklin wrote:
>
> I think you are right about some references not being released yet:
>
> If I change the while loop to include you way of replacing every 
> reference, the put! actually never gets executed, it just waits:
>
> while true
> zeroMatrix = 
> SharedArray(Float64,(nQ,nQ,3,nQ,nQ,nQ),pids=workers(), init = x->inF(x,nQ))
> println("ran!")
>
> for i = 1:length(zeroMatrix.refs) 
> put!(zeroMatrix.refs[i], 1) 
> end 
> @everywhere gc()
>end
> ran!
> 
>
> Runs once and stalls, after C-c:
>
>
> ^CERROR: interrupt
>  in process_events at /usr/bin/../lib64/julia/sys.so
>  in wait at /usr/bin/../lib64/julia/sys.so (repeats 2 times)
>  in wait_full at /usr/bin/../lib64/julia/sys.so
> 
> After C-d
>
> julia> 
>
> WARNING: Forcibly interrupting busy workers
> error in running finalizer: InterruptException()
> error in running finalizer: InterruptException()
> WARNING: Unable to terminate all workers
> [...]
>
>
> It seems after the init function not all workers are "done". I'll see if 
> there's something weird with that part, but if the SharedArray is being 
> returned, I don't see any reason for this to be so.
>
>
>
> On Wednesday, 10 December 2014 05:19:55 UTC-5, Tim Holy wrote:
>>
>> After your gc() it should be able to be unmapped, see 
>>
>> https://github.com/JuliaLang/julia/blob/f3c355115ab02868ac644a5561b788fc16738443/base/mmap.jl#L113
>>  
>>
>> My guess is something in the parallel architecture is holding a 
>> reference. 
>> Have you tried going at this systematically from the internal 
>> representation 
>> of the SharedArray? For example, I might consider trying to put! new 
>> stuff in 
>> zeroMatrix.refs: 
>>
>> for i = 1:length(zeroMatrix.refs) 
>> put!(zeroMatrix.refs[i], 1) 
>> end 
>>
>> before calling gc(). I don't know if this will work, but it's where I'd 
>> start 
>> experimenting. 
>>
>> If you can fix this, please do submit a pull request. 
>>
>> Best, 
>> --Tim 
>>
>> On Tuesday, December 09, 2014 08:06:10 PM ele...@gmail.com wrote: 
>> > On Wednesday, December 10, 2014 12:28:29 PM UTC+10, benFranklin wrote: 
>> > > I've made a small example of the memory problems I've been running 
>> into. I 
>> > > can't find a way to deallocate a SharedArray, 
>> > 
>> > Someone more expert might find it, but I can't see anywhere that the 
>> > mmapped memory is unmapped. 
>> > 
>> > > if the code below runs once, it means the computer has enough memory 
>> to 
>> > > run this. If I can properly deallocate the memory I should be able to 
>> do 
>> > > it 
>> > > again, however, I run out of memory. Am I misunderstanding something 
>> about 
>> > > garbage collection in Julia? 
>> > > 
>> > > Thanks for your attention 
>> > > 
>> > > Code: 
>> > > 
>> > > @everywhere nQ = 60 
>> > > 
>> > > @everywhere function inF(x::SharedArray,nQ::Int64) 
>> > > 
>> > > number = myid()-1; 
>> > > targetLength = nQ*nQ*3 
>> > > 
>> > > startN = floor((number-1)*targetLength/nworkers()) + 1 
>> > > endN = floor(number*targetLength/nworkers()) 
>> > > 
>> > > myIndexes = int64(startN:endN) 
>> > > for j in myIndexes 
>> > > inds = ind2sub((nQ,nQ,nQ),j) 
>> > > x[inds[1],inds[2],inds[3],:,:,:] = rand(nQ,nQ,nQ) 
>> > > end 
>> > > 
>> > > 
>> > > end 
>> > > 
>> > > while true 
>> > > zeroMatrix = SharedArray(Float64,(nQ,nQ,3,nQ,nQ,nQ),pids=workers(), 
>> init = 
>> > > x->inF(x,nQ)) 
>> > > println("ran!") 
>> > > @everywhere zeroMatrix = 1 
>> > > @everywhere gc() 
>> > > end 
>> > > 
>> > > On Monday, 8 December 2014 23:43:03 UTC-5, Isaiah wrote: 
>> > >> Hopefully you will get an answer on pmap from someone more familiar 
>> with 
>> > >> the parallel stuff, but: have you tried splitting the init step? 
>> (see the 
>> > >> example in the manual for how to init an array in chunks done by 
>> > >> different 
>> > >> workers). Just guessing though: I'm not sure if/how those will be 
>> > >> serialized if each worker is contending for the whole array. 
>> > >> 
>> > >> On Fri, Dec 5, 2014 at 4:23 PM, benFranklin  
>> wrote: 
>> > >>> Hi all, I'm trying to figure out how to best initialize a 
>> SharedArray, 
>> > >>> using a C function to fill it up that computes a huge matrix in 
>> parts, 
>> > >>> and 
>> > >>> all comments are appreciated. To summarise: Is A, making an empty 
>> shared 
>> > >>> array, computing the matrix in parallel using pmap and then filling 
>> it 
>> > >>> up 
>> > >>> serially, better than using B, computing in parallel and storing in 
>> one 
>> > >>> step by using an init function in the SharedArray declaration? 
>> > >>> 
>> 

Re: [julia-users] Is there a function that performs (1:length(v))[v] for v a Vector{Bool} or a BitArray?

2014-12-11 Thread Douglas Bates

On Thursday, December 11, 2014 3:21:07 PM UTC-6, John Myles White wrote:
>
> Does find() work? 
>

Yes.  Thanks. 
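For reference, a quick check that `find` does exactly this:

```julia
v = [true, false, true, true]     # same behavior for a BitArray
find(v)                           # => [1, 3, 4]
(1:length(v))[v] == find(v)       # => true
```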

>
>  -- John 
>
> On Dec 11, 2014, at 4:19 PM, Douglas Bates > 
> wrote: 
>
> > I realize it would be a one-liner to write one but, in the interests of 
> not reinventing the wheel, I wanted to ask if I had missed a function that 
> does this.  In R there is such a function called "which" but that name is 
> already taken in Julia for something else. 
>
>

Re: [julia-users] scope using let and local/global variables

2014-12-11 Thread Michael Mayo
Thanks for both answers! I figured out a slightly different way of doing it 
by putting the let assignments into a string with a "nothing" expression, 
parsing the string, and then inserting the actual expression to be 
evaluated into the correct place in the let block:

function run(assignments,program)
  program_string="let"
  for pair in assignments
program_string="$(program_string) $(pair[1])=$(pair[2]);"
  end
  program_string="$(program_string) nothing; end"
  program_final=parse(program_string)
  program_final.args[1].args[end]=program
  eval(program_final)
end

I can now evaluate the same expression with different inputs in parallel 
without worrying that they might conflict because all the variables are 
local, e.g.:

pmap(dict->run(dict,:(x+y*y)), [{:x=>2,:y=>5},{:x=>6,:y=>10}])

2-element Array{Any,1}:
  27
 106

Thanks for your help!
Mike



On Thursday, December 11, 2014 10:34:22 PM UTC+13, Mike Innes wrote:
>
> You can do this just fine, but you have to be explicit about what 
> variables you want to pass in, e.g.
>
> let x=2
>   exp=:(x+1)
>   eval(:(let x = $x; $exp; end))
> end
>
> If you want to call the expression with multiple inputs, wrap it in a 
> function:
>
> let x=2
>   exp=:(x+1)
>   f = eval(:(x -> $exp))
>   f(x)
> end
>
>
> On 11 December 2014 at 06:32, Jameson Nash 
> > wrote:
>
>> I'm not quite sure what a genetic program of that sort would look like. I 
>> would be interested to hear if you get something out of it.
>>
>> Another alternative is to use a module as the environment:
>>
>> module MyEnv
>> end
>> eval(MyEnv, :(code block))
>>
>> This is (roughly) how the REPL is implemented to work.
>>
>> On Thu Dec 11 2014 at 1:26:57 AM Michael Mayo > > wrote:
>>
>>> Thanks, but its not quite what I'm looking for. I want to be able to 
>>> edit the Expr tree and then evaluate different expressions using variables 
>>> defined in the local scope,not the global scope (e.g. for genetic 
>>> programming, where random changes to an expression are repeatedly evaluated 
>>> to find the best one). Using anonymous functions could work but modifying 
>>> the .code property of an anonymous function looks much more complex than 
>>> modifying the Expr types.
>>>
>>> Anyway thanks for your answer, maybe your suggestion is the only 
>>> possible way to achieve this!
>>>
>>> Mike 
>>>
>>>
>>> On Thursday, December 11, 2014 6:56:15 PM UTC+13, Jameson wrote:
>>>
 eval, by design, doesn't work that way. there are just too many better 
 alternatives. typically, an anonymous function / lambda is the best and 
 most direct replacement:

 let x=2
   println(x)# Line 1
   exp = () -> x+1
   println(exp())# Line 2
 end


 On Wed Dec 10 2014 at 10:43:00 PM Michael Mayo  
 wrote:

> Hi folks,
>
> I have the following code fragment:
>
> x=1
> let x=2
>   println(x)# Line 1
>   exp=:(x+1)
>   println(eval(exp))# Line 2
> end
>
> It contains two variables both named x, one inside the scope defined 
> by let, and one at global scope.
>
> If I run this code the output is:
> 2
> 2
>
> This indicates that (i) that line 1 is using the local version of x, 
> and (ii) that line 2 is using the global version of x.
>
> If I remove this global x I now get an error because eval() is looking 
> for the global x which no longer exists:
>
> let x=2
>   println(x)# Line 1
>   exp=:(x+1)
>   println(eval(exp))# Line 2
> end
>
> 2
>
> ERROR: x not defined
>
>
> My question: when evaluating an expression using eval() such as line 
> 2, how can I force Julia to use the local (not global) version of x and 
> thus avoid this error?
>
>
> Thanks
>
> Mike
>
  
>

Re: [julia-users] Broadcasting variables

2014-12-11 Thread Madeleine Udell
Amit and Blake, thanks for all your advice. I've managed to cobble together
a shared memory version of LowRankModels.jl, using the workarounds we
devised above. In case you're interested, it's at

https://github.com/madeleineudell/LowRankModels.jl/blob/master/src/shareglrm.jl

and you can run it using eg

julia -p 3 LowRankModels/examples/sharetest.jl

There's a significant overhead, but it's faster than the serial version for
large problem sizes. Any advice for reducing the overhead would be much
appreciated.

However, in that package I'm seeing some unexpected behavior: occasionally
it seems that some of the processes have not finished their jobs at the end
of an @everywhere block, although looking at the code for @everywhere I see
it's wrapped in a @sync already. Is there something else I can use to
synchronize (ie wait for completion of all) the processes?
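(For what it's worth, one pattern that is guaranteed to block until every worker finishes is an explicit @sync around @async remotecall_wait calls; a sketch, where `work_chunk` is a hypothetical stand-in for the per-worker job:)

```julia
@everywhere work_chunk() = (sum(rand(10^6)); nothing)  # hypothetical per-worker job

@sync begin
    for p in workers()
        @async remotecall_wait(p, work_chunk)  # @sync returns only after all of these complete
    end
end
```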

On Tue, Dec 2, 2014 at 12:21 AM, Amit Murthy  wrote:

> Issue - https://github.com/JuliaLang/julia/issues/9219
>
> On Tue, Dec 2, 2014 at 10:04 AM, Amit Murthy 
> wrote:
>
>> From the documentation - "Modules in Julia are separate global variable
>> workspaces."
>>
>> So what is happening is that the anonymous function in "remotecall(i,
>> x->(global const X=x; nothing), localX)" creates X as module global.
>>
>> The following works:
>>
>> module ParallelStuff
>> export doparallelstuff
>>
>> function doparallelstuff(m = 10, n = 20)
>> # initialize variables
>> localX = Base.shmem_rand(m; pids=procs())
>> localY = Base.shmem_rand(n; pids=procs())
>> localf = [x->i+sum(x) for i=1:m]
>> localg = [x->i+sum(x) for i=1:n]
>>
>> # broadcast variables to all worker processes (thanks to Amit Murthy
>> for suggesting this syntax)
>> @sync begin
>> for i in procs(localX)
>> remotecall(i, x->(global X=x; nothing), localX)
>> remotecall(i, x->(global Y=x; nothing), localY)
>> remotecall(i, x->(global f=x; nothing), localf)
>> remotecall(i, x->(global g=x; nothing), localg)
>> end
>> end
>>
>> # compute
>> for iteration=1:1
>> @everywhere begin
>> X=ParallelStuff.X
>> Y=ParallelStuff.Y
>> f=ParallelStuff.f
>> g=ParallelStuff.g
>> for i=localindexes(X)
>> X[i] = f[i](Y)
>> end
>> for j=localindexes(Y)
>> Y[j] = g[j](X)
>> end
>> end
>> end
>> end
>>
>> end #module
>>
>>
>> While remotecall, @everywhere, etc run under Main, the fact that the
>> closure variables refers to Module ParallelStuff is pretty confusing.
>> I think we need a better way to handle this.
>>
>>
>> On Tue, Dec 2, 2014 at 4:58 AM, Madeleine Udell <
>> madeleine.ud...@gmail.com> wrote:
>>
>>> Thanks to Blake and Amit for some excellent suggestions! Both strategies
>>> work fine when embedded in functions, but not when those functions are
>>> embedded in modules. For example, the following throws an error:
>>>
>>> @everywhere include("ParallelStuff.jl")
>>> @everywhere using ParallelStuff
>>> doparallelstuff()
>>>
>>> when ParallelStuff.jl contains the following code:
>>>
>>> module ParallelStuff
>>> export doparallelstuff
>>>
>>> function doparallelstuff(m = 10, n = 20)
>>> # initialize variables
>>> localX = Base.shmem_rand(m; pids=procs())
>>> localY = Base.shmem_rand(n; pids=procs())
>>> localf = [x->i+sum(x) for i=1:m]
>>> localg = [x->i+sum(x) for i=1:n]
>>>
>>> # broadcast variables to all worker processes (thanks to Amit Murthy
>>> for suggesting this syntax)
>>> @sync begin
>>> for i in procs(localX)
>>> remotecall(i, x->(global const X=x; nothing), localX)
>>> remotecall(i, x->(global const Y=x; nothing), localY)
>>> remotecall(i, x->(global const f=x; nothing), localf)
>>> remotecall(i, x->(global const g=x; nothing), localg)
>>> end
>>> end
>>>
>>> # compute
>>> for iteration=1:1
>>> @everywhere for i=localindexes(X)
>>> X[i] = f[i](Y)
>>> end
>>> @everywhere for j=localindexes(Y)
>>> Y[j] = g[j](X)
>>> end
>>> end
>>> end
>>>
>>> end #module
>>>
>>> On 3 processes (julia -p 3), the error is as follows:
>>>
>>> exception on 1: exception on 2: exception on 3: ERROR: X not defined
>>>  in anonymous at no file
>>>  in eval at
>>> /Users/vagrant/tmp/julia-packaging/osx10.7+/julia-master/base/sysimg.jl:7
>>>  in anonymous at multi.jl:1310
>>>  in run_work_thunk at multi.jl:621
>>>  in run_work_thunk at multi.jl:630
>>>  in anonymous at task.jl:6
>>> ERROR: X not defined
>>>  in anonymous at no file
>>>  in eval at
>>> /Users/vagrant/tmp/julia-packaging/osx10.7+/julia-master/base/sysimg.jl:7
>>>  in anonymous at multi.jl:1310
>>>  in anonymous at multi.jl:848
>>>  in run_work_thunk at multi.jl:621
>>>  in run_work_thunk at multi.jl:630
>>>  in anonymous at task.jl:6
>>> ERROR: 

Re: [julia-users] Problem with Domain Error in an aggregator function

2014-12-11 Thread Andreas Noack
The problem is the non-integer power of a negative number, so you'll have
to restrict the consumption to be positive.
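A minimal sketch of that fix, reusing the poster's variable names but drawing strictly positive consumptions (and indexing cm as a matrix for clarity):

```julia
m, n = 100, 100
c1 = rand(m) + 0.01   # rand draws from (0,1); the shift keeps values strictly positive
c2 = rand(n) + 0.01
cm = zeros(m, n)

for i = 1:m, j = 1:n
    cm[i,j] = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
end
```

With randn, any negative draw raised to the non-integer power 0.3/1.3 throws the DomainError.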

2014-12-11 16:29 GMT-05:00 Pileas :

> Hello all,
>
> I have this function from which I want to make a 3D figure. I have some
> problems though, because I get a domain error. I am sure it must be a
> stupid mistake or something that I do not understand ... but I cannot
> figure out what I am doing wrong.
>
> So, the function is: C_M = (c_1^(0.3/1.3) + c_2^(0.3/1.3))^(1.3/0.3)
>
> To my understanding, in order to make a 3D graph I need to keep one
> dimension constant while the other changes and gives values to C_M until
> all combinations in both directions have been exhausted (at least in the
> domain that I set). Eventually you get triplets of the form (c1, c2, cm).
>
> Here is the code:
>
> --
> m = 100;# 100 points in each direction
> n = 100; # 100 points in each direction
>
> c1 = randn(m);
> c2 = randn(n);
> cm = zeros(m*n);  # Cartesian product
>
> csvfile = open("graph.csv","w")
> write(csvfile,"c1,c2,cm, \n")
>
> for i = 1:m
> for j = 1:n
>    cm[] = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
> end
> write(csvfile, join(graph,","), "\n")
> end
>
> --
>
> P.S. I want to save the results and then use PyPlot to make the graph.
>
> Thank you for your time.
>


[julia-users] Problem with Domain Error in an aggregator function

2014-12-11 Thread Pileas
Hello all,

I have this function from which I want to make a 3D figure. I have some 
problems though, because I get a domain error. I am sure it must be a 
stupid mistake or something that I do not understand ... but I cannot 
figure out what I am doing wrong.

So, the function is: C_M = (c_1^(0.3/1.3) + c_2^(0.3/1.3))^(1.3/0.3)

To my understanding, in order to make a 3D graph I need to keep one 
dimension constant while the other changes and gives values to C_M until 
all combinations in both directions have been exhausted (at least in the 
domain that I set). Eventually you get triplets of the form (c1, c2, cm).

Here is the code:

--
m = 100;# 100 points in each direction
n = 100; # 100 points in each direction

c1 = randn(m);
c2 = randn(n);
cm = zeros(m*n);  # Cartesian product

csvfile = open("graph.csv","w")
write(csvfile,"c1,c2,cm, \n")

for i = 1:m
for j = 1:n
   cm[] = (c1[i]^(0.3/1.3) + c2[j]^(0.3/1.3))^(1.3/0.3)
end
write(csvfile, join(graph,","), "\n")
end

--

P.S. I want to save the results and then use PyPlot to make the graph.

Thank you for your time.


Re: [julia-users] Is there a function that performs (1:length(v))[v] for v a Vector{Bool} or a BitArray?

2014-12-11 Thread John Myles White
Does find() work?

 -- John

On Dec 11, 2014, at 4:19 PM, Douglas Bates  wrote:

> I realize it would be a one-liner to write one but, in the interests of not 
> reinventing the wheel, I wanted to ask if I had missed a function that does 
> this.  In R there is such a function called "which" but that name is already 
> taken in Julia for something else.



Re: [julia-users] Reviewing a Julia programming book for Packt

2014-12-11 Thread cdm

i will vote with my green paper and let the "market" decide ...

here is some competition in the book space:

LJTHW (aka: 'the g–ddamn book')
http://chrisvoncsefalvay.com/2014/12/11/A-change-of-seasons.html


awesome.

cdm


On Friday, December 5, 2014 9:23:29 AM UTC-8, Iain Dunning wrote:
>
>  

> I don't think such a book should exist (yet). 





[julia-users] Is there a function that performs (1:length(v))[v] for v a Vector{Bool} or a BitArray?

2014-12-11 Thread Douglas Bates
I realize it would be a one-liner to write one but, in the interests of not 
reinventing the wheel, I wanted to ask if I had missed a function that does 
this.  In R there is such a function called "which" but that name is 
already taken in Julia for something else.


Re: [julia-users] Install Julia packages without internet

2014-12-11 Thread Pooja Khanna
That worked. Thank you so much!

Pooja

On Thursday, December 11, 2014 1:00:17 PM UTC-8, Stefan Karpinski wrote:
>
> I guess I would install the packages you need on another system with a 
> similar OS and setup and then copy over the julia install directory and the 
> ~/.julia directory.
>
> On Thu, Dec 11, 2014 at 3:30 PM, Pooja Khanna  > wrote:
>
>> I can copy stuff onto it but do not have physical access.
>>
>> On Thursday, December 11, 2014 12:26:16 PM UTC-8, Stefan Karpinski wrote:
>>>
>>> Does the machine have a USB drive or something?
>>>
>>> On Thu, Dec 11, 2014 at 3:16 PM, Pooja Khanna  
>>> wrote:
>>>
 Hello,

 I have no Internet connection on the server I am running Julia on.
 What would be the steps to install packages on this machine?

 Any pointers would be appreciated.

 Thanks,
 Pooja Khanna

>>>
>>>
>

Re: [julia-users] Re: EACCESS errors when starting julia 0.3.3

2014-12-11 Thread Stefan Karpinski
On Thu, Dec 11, 2014 at 4:05 PM, Robbin Bonthond 
wrote:

> for some reason the binary installer of julia uses restricted group
> permissions


That sounds like a potential problem – would you mind filing an issue?

https://github.com/JuliaLang/julia/issues


[julia-users] Re: EACCESS errors when starting julia 0.3.3

2014-12-11 Thread Robbin Bonthond
found it, for some reason the binary installer of julia uses restricted 
group permissions

$ ll -l /tool/julia/0.3.3/./etc
total 4.0K
dr-x--S--- 2 cad cad 4.0K Dec 11 14:56 julia

We fixed all directories to be accessible, and things are working now.

Robbin
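For anyone else hitting this, the repair can be scripted; a sketch (`fix_perms` is just a helper name, and the install path would be the one from this thread):

```shell
# Make every directory in the install tree world-traversable and every
# file world-readable, without adding execute bits to regular files.
fix_perms() {
    find "$1" -type d -exec chmod a+rx {} +
    find "$1" -type f -exec chmod a+r {} +
}

# e.g.: fix_perms /tool/julia/0.3.3
```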


On Thursday, December 11, 2014 2:28:53 PM UTC-6, Robbin Bonthond wrote:
>
> I'm trying to start julia 0.3.3 on a RHEL510 linux 64b machine. The tool 
> has been installed centrally via a NFS share where I don't have write 
> access to.
>
> $ /tool/julia/0.3.3/bin/julia
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "help()" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.3 (2014-11-23 20:19 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/   |  x86_64-redhat-linux
>
> ERROR: stat: permission denied (EACCES)
>  in stat at ./stat.jl:43
>  in isfile at ./stat.jl:103
>  in load_juliarc at ./client.jl:322
>  in _start at ./client.jl:382
>  in _start_3B_1732 at /tool/julia/0.3.3/bin/../lib/julia/sys.so
>
> $ ls -l /tool/julia/0.3.3/bin/../lib/julia/sys.so
> -r-xr-xr-x 1 cad cad 6007276 Nov 30 14:57 
> /tool/julia/0.3.3/bin/../lib/julia/sys.so*
>
> Any suggestions?
>
> Robbin
>
>

Re: [julia-users] Install Julia packages without internet

2014-12-11 Thread Stefan Karpinski
I guess I would install the packages you need on another system with a
similar OS and setup and then copy over the julia install directory and the
~/.julia directory.
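A sketch of that workflow as two small helpers (the function names and the DataFrames example are placeholders, not an official procedure):

```shell
# On the internet-connected machine, after e.g. `julia -e 'Pkg.add("DataFrames")'`,
# bundle the package directory:
bundle_julia_pkgs() {   # usage: bundle_julia_pkgs SRC_HOME OUT.tar.gz
    tar czf "$2" -C "$1" .julia
}

# Carry the tarball over (USB, shared filesystem, ...), then on the offline server:
restore_julia_pkgs() {  # usage: restore_julia_pkgs IN.tar.gz DEST_HOME
    tar xzf "$1" -C "$2"
}
```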

On Thu, Dec 11, 2014 at 3:30 PM, Pooja Khanna 
wrote:

> I can copy stuff onto it but do not have physical access.
>
> On Thursday, December 11, 2014 12:26:16 PM UTC-8, Stefan Karpinski wrote:
>>
>> Does the machine have a USB drive or something?
>>
>> On Thu, Dec 11, 2014 at 3:16 PM, Pooja Khanna 
>> wrote:
>>
>>> Hello,
>>>
>>> I have no Internet connection on the server I am running Julia on.
>>> What would be the steps to install packages on this machine?
>>>
>>> Any pointers would be appreciated.
>>>
>>> Thanks,
>>> Pooja Khanna
>>>
>>
>>


Re: [julia-users] Help with types (Arrays)

2014-12-11 Thread S


On Thursday, December 11, 2014 12:34:10 PM UTC-8, Isaiah wrote:
>
> I suggest to start with this recent thread, and there are some others if 
> you search for "covariance" on the list archives:
>
> https://groups.google.com/d/msg/julia-users/dEnCJ-nxAGE/1Bw4a1UvFCEJ
>
> (if it is still unclear how to do what you need after that, feel free to 
> ask)
>
>
>
>
>
Thank you. I'm sorry I didn't see this before posting. I'll have a look. 


Re: [julia-users] Help with types (Arrays)

2014-12-11 Thread Isaiah Norton
I suggest to start with this recent thread, and there are some others if
you search for "covariance" on the list archives:

https://groups.google.com/d/msg/julia-users/dEnCJ-nxAGE/1Bw4a1UvFCEJ

(if it is still unclear how to do what you need after that, feel free to
ask)
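Concretely, the standard idiom is a parametric method with a bounded type variable; `foo` here is just an illustrative name:

```julia
# Accepts Vector{T} for any T <: Integer, and nothing else
foo{T<:Integer}(myarr::Vector{T}) = sum(myarr)

foo([1,2,3,4])        # Vector{Int64}: works
foo(Int8[1,2,3,4])    # Vector{Int8}: works
# foo([1.0,2.0])      # no matching method: Float64 is not <: Integer
```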






On Thu, Dec 11, 2014 at 3:26 PM, S  wrote:

>
>
> Hi all - very new to julia and am trying to wrap my head around multiple
> dispatch.
>
> I would think that because Int64 <: Integer, that an array of Int64 is an
> array of Integer. However, that doesn't appear to be the case.
>
> That is, with c = [1,2,3,4], I can't create a constructor function
> foo(myarr::Array{Integer,1}) - and Vector{Integer} doesn't work either.
>
> How do I create a constructor that takes a vector of Integers (of any
> subtype) - and only integers - and operates on the elements?
>
>
> julia> c = [1,2,3,4]
>
> julia> c
> 4-element Array{Int64,1}:
>  1
>  2
>  3
>  4
>
> julia> isa(c,Vector)
> true
>
> julia> isa(c,Vector{Integer})
> false
>
> julia> isa(c,Vector{Int64})
> true
>
> julia> isa(c,Array{Integer})
> false
>


Re: [julia-users] Install Julia packages without internet

2014-12-11 Thread Pooja Khanna
I can copy stuff onto it but do not have physical access.

On Thursday, December 11, 2014 12:26:16 PM UTC-8, Stefan Karpinski wrote:
>
> Does the machine have a USB drive or something?
>
> On Thu, Dec 11, 2014 at 3:16 PM, Pooja Khanna  > wrote:
>
>> Hello,
>>
>> I have no Internet connection on the server I am running Julia on.
>> What would be the steps to install packages on this machine?
>>
>> Any pointers would be appreciated.
>>
>> Thanks,
>> Pooja Khanna
>>
>
>

[julia-users] EACCESS errors when starting julia 0.3.3

2014-12-11 Thread Robbin Bonthond
I'm trying to start julia 0.3.3 on a RHEL510 linux 64b machine. The tool 
has been installed centrally via a NFS share where I don't have write 
access to.

$ /tool/julia/0.3.3/bin/julia
   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.3 (2014-11-23 20:19 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/   |  x86_64-redhat-linux

ERROR: stat: permission denied (EACCES)
 in stat at ./stat.jl:43
 in isfile at ./stat.jl:103
 in load_juliarc at ./client.jl:322
 in _start at ./client.jl:382
 in _start_3B_1732 at /tool/julia/0.3.3/bin/../lib/julia/sys.so

$ ls -l /tool/julia/0.3.3/bin/../lib/julia/sys.so
-r-xr-xr-x 1 cad cad 6007276 Nov 30 14:57 
/tool/julia/0.3.3/bin/../lib/julia/sys.so*

Any suggestions?

Robbin



[julia-users] Re: MatrixDepot.jl: A Test Matrix Collection

2014-12-11 Thread cdm

thank you for your fine work in this area ...

do not be surprised if you inspired some
to take on some of these collections:

   http://math.nist.gov/MatrixMarket/

   http://www.cise.ufl.edu/research/sparse/matrices/


well done and good show !

best,

cdm



On Thursday, December 11, 2014 12:30:23 AM UTC-8, Weijian Zhang wrote:
>
> Hello,
>
> So far I have included 20 matrices in Matrix Depot. I just modified the 
> function matrixdepot() so it should display information nicely.
>
> The repository is here: https://github.com/weijianzhang/MatrixDepot.jl
>
> The documentation is here: 
> http://nbviewer.ipython.org/github/weijianzhang/MatrixDepot.jl/blob/master/doc/juliadoc.ipynb
>
> Let me know how you feel about it and if you have any questions.
>
> Thanks,
>
> Weijian
>
>
>
>
>

[julia-users] Help with types (Arrays)

2014-12-11 Thread S


Hi all - very new to julia and am trying to wrap my head around multiple 
dispatch.

I would think that because Int64 <: Integer, that an array of Int64 is an 
array of Integer. However, that doesn't appear to be the case.

That is, with c = [1,2,3,4], I can't create a constructor function 
foo(myarr::Array{Integer,1}) - and Vector{Integer} doesn't work either.

How do I create a constructor that takes a vector of Integers (of any 
subtype) - and only integers - and operates on the elements?


julia> c = [1,2,3,4]

julia> c
4-element Array{Int64,1}:
 1
 2
 3
 4

julia> isa(c,Vector)
true

julia> isa(c,Vector{Integer})
false

julia> isa(c,Vector{Int64})
true

julia> isa(c,Array{Integer})
false


Re: [julia-users] Install Julia packages without internet

2014-12-11 Thread Stefan Karpinski
Does the machine have a USB drive or something?

On Thu, Dec 11, 2014 at 3:16 PM, Pooja Khanna 
wrote:

> Hello,
>
> I have no Internet connection on the server I am running Julia on.
> What would be the steps to install packages on this machine?
>
> Any pointers would be appreciated.
>
> Thanks,
> Pooja Khanna
>


[julia-users] Install Julia packages without internet

2014-12-11 Thread Pooja Khanna
Hello,

I have no Internet connection on the server I am running Julia on.
What would be the steps to install packages on this machine?

Any pointers would be appreciated.

Thanks,
Pooja Khanna


Re: [julia-users] How save Dict{Any,Int64} ?

2014-12-11 Thread Paul Analyst

For now I do:
writecsv("dict.csv", dict)

d1=readcsv("dict.csv")
d=Dict(d1[:,1],d1[:,2])
or
d=Dict(d1[:,1],int(d1[:,2])) if the second column is Int

Paul

Is it a bug that
save("o2a.jld","o2a",o2a)
does not work when o2a is a Dict?

Paul

W dniu 2014-12-11 o 14:57, Daniel Høegh pisze:

You can serialize it see 
https://groups.google.com/forum/m/?fromgroups#!topic/julia-users/zN7OmKwnG40




Re: [julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
I experimented with it a little bit before (mx innermost loop): does not 
make a difference.

On Thursday, December 11, 2014 9:55:46 AM UTC-8, Peter Simon wrote:
>
> One thing I noticed after a quick glance:  The ordering of your nested 
> loops is very cache-unfriendly.  Julia stores arrays in column-major order 
> (same as Fortran) so that nested loops should arrange that the first 
> subscripts of multidimensional arrays are varied most rapidly.
>
> --Peter
>
> On Thursday, December 11, 2014 9:47:33 AM UTC-8, Petr Krysl wrote:
>>
>> One more note: I conjectured that perhaps the compiler was not able to 
>> infer correctly the type of the matrices,  so I hardwired (in the actual FE 
>> code)
>>
>> Jac = 1.0; gradN = gradNparams[j]/(J); # get rid of Rm for the moment
>>
>> About 10% less memory used, runtime about the same.  So, no effect 
>> really. Loops are still slower than the vectorized code by a factor of two.
>>
>> Petr
>>
>>
>>

[julia-users] Re: Are there julia versions of dynamic time warping and peak finding in noisy data?

2014-12-11 Thread Joe Fowler
Hi g

You and I have discussed this privately and decided to work up a 
DynamicTimeWarp package in Julia ourselves, because we couldn't find one. 
It's not nearly ready for real-world use, I think, but it can be found at 
GitHub: https://github.com/joefowler/DynamicTimeWarp.jl

Our goals for the package include:

   1. Performing Dynamic Time Warping between 2 time series.
   2. Performing DTW with the solution path restricted to a specified 
   "window". This restriction speeds up the computation but can fail to find 
   the global optimum.
   3. The FastDTW algorithm (Salvador & Chan, 2007), which effectively 
   chooses a window by downsampling the original problem and running FastDTW 
   on that (or as a base case, running DTW once the down sampled problem is 
   small enough). Also faster but potentially misses the full DTW solution.
   4. DTW Barycenter Averaging (Petitjean, Ketterlin, and Gancarski, 
   _Pattern Recognition_ 44, 2011).  This algorithm aims to create a 
   "consensus sequence" iteratively from 2 or more sequences, using the 
   identification of samples that DTW generates between the latest consensus 
   and the constituent sequences.
   5. Tools for using DTW to align spectra. In our work, this would mean 
   calibration to unify uncalibrated energy spectra from x-ray spectrometers. 
   This is not a well-defined goal yet, but it's the reason that you and I 
   actually care about DTW.
   6. Demonstrations, documentation, good tests, and the usual things like 
   that.


Peak-finding, e.g. by continuous wavelet transforms or any other method, is 
a separate issue.

--Joe
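For readers who want the core idea before the package matures, the basic dynamic program is short; this is a textbook sketch with an absolute-difference cost, not the package's API:

```julia
# Dynamic Time Warping distance between two series, O(length(a)*length(b)).
function dtw(a, b)
    m, n = length(a), length(b)
    D = fill(Inf, m + 1, n + 1)   # D[i+1,j+1]: best alignment cost of a[1:i] vs b[1:j]
    D[1, 1] = 0.0
    for j = 1:n, i = 1:m
        cost = abs(a[i] - b[j])
        D[i+1, j+1] = cost + min(D[i, j], D[i, j+1], D[i+1, j])
    end
    D[m+1, n+1]
end

dtw([0.0, 1.0, 2.0], [0.0, 1.0, 1.0, 2.0])   # => 0.0: the repeated sample is absorbed by warping
```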

On Wednesday, December 3, 2014 4:03:41 PM UTC-7, g wrote:
>
> Hello,
>
> I'm interested in using dynamic time warping and an algorithm for peak 
> finding in noisy data (like scipy.signal.find_peaks_cwt).  I'm just 
> wondering if there are any Julia implementations around, otherwise I'll 
> probably just use PyCall for now to use existing python code.
>
>
>
>
>

Re: [julia-users] Changes to array are not visible from caller

2014-12-11 Thread Tim Holy
On Thursday, December 11, 2014 09:51:32 AM Mark Stock wrote:
> Is there any way to update the array in-place without writing an explicit
> loop? I imagine that would be more efficient.

It's not. Other languages you may be used to call C, and their underlying C 
code uses...loops. Julia's loops are just as fast (and use SIMD vectorization, 
etc, when possible).

I'll repeat John's point that

   x[:, :] = RHS

updates x in-place, and on its own it is not at all wasteful. It's just a 
question of whether you need to allocate space for RHS---that's where the 
problem comes in.

--Tim
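Condensing the thread into one sketch:

```julia
f_rebind!(x) = (x = x + 1.0)      # allocates a new array; the caller never sees it
f_assign!(x) = (x[:] = x + 1.0)   # allocates a temporary RHS, then writes into the caller's x
function f_loop!(x)               # no temporaries at all
    for i = 1:length(x)
        x[i] += 1.0
    end
end

a = zeros(2); f_rebind!(a); a   # => [0.0, 0.0]  (unchanged)
b = zeros(2); f_assign!(b); b   # => [1.0, 1.0]
c = zeros(2); f_loop!(c);   c   # => [1.0, 1.0]
```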


> 
> Mark
> 
> On Thursday, December 11, 2014 10:23:35 AM UTC-7, John Myles White wrote:
> > Nope.
> > 
> > You'll find Julia much easier to program in if you always replace x += y
> > with x = x + y before attempting to reason about performance. In this
> > case,
> > you'll
> > get
> > 
> > x[:, :] = x[:, :] + 1.0f-5 * dxdt
> > 
> > In other words, you literally make a copy of the entire matrix x before
> > doing any useful work.
> > 
> >  -- John
> > 
> > On Dec 11, 2014, at 12:21 PM, Mark Stock  > > wrote:
> > 
> > The line now reads
> > 
> > x[:,:] += 1.0f-5*dxdt
> > 
> > And the result is now correct, but memory usage increased. Shouldn't it go
> > down if we're re-assigning to the same variable?
> > 
> > On Wednesday, December 10, 2014 11:57:28 PM UTC-7, Isaiah wrote:
> >> `x += ...` is equivalent to writing `x = x + ...` which rebinds the
> >> variable within that function. Instead, do an explicit array assignment
> >> `x[:,:] = ...`
> >> 
> >> This is discussed in the manual with a warning about type changes, but
> >> the implication for arrays should probably be made clear as well:
> >> http://julia.readthedocs.org/en/latest/manual/mathematical-operations/#up
> >> dating-operators
> >> 
> >> (there are some ongoing discussions about in-place array operators to
> >> improve the situation)
> >> 
> >> On Wed, Dec 10, 2014 at 7:44 PM, Mark Stock  wrote:
> >>> Hello, n00b Julia user here. I have two functions that change the values
> >>> of a passed-in array. One works (dxFromX), but for the other one
> >>> (eulerStep) the caller does not see any changes to the array. Why is
> >>> this?
> >>> 
> >>> function dxFromX!(x,dxdt)
> >>> 
> >>>   fill!(dxdt,0.0)
> >>>   
> >>>   for itarg = 1:size(x,1)
> >>>   
> >>> for isrc = 1:size(x,1)
> >>> 
> >>>   dx = x[isrc,1] - x[itarg,1]
> >>>   dy = x[isrc,2] - x[itarg,2]
> >>>   coeff = 1.0f0 / (dx^2 + dy^2 + 0.1f0)
> >>>   dxdt[itarg,1] -= coeff * dy
> >>>   dxdt[itarg,2] += coeff * dx
> >>> 
> >>> end
> >>>   
> >>>   end
> >>> 
> >>> end
> >>> 
> >>> function eulerStep!(x)
> >>> 
> >>>   dxdt = zeros(x)
> >>>   print ("\ndxdt before\n",dxdt[1:5,:],"\n")
> >>>   dxFromX!(x,dxdt)
> >>>   print ("\ndxdt after has changed\n",dxdt[1:5,:],"\n")
> >>>   x += 1.0f-5*dxdt
> >>>   print ("\nx inside\n",x[1:5,:],"\n")
> >>> 
> >>> end
> >>> 
> >>> x = float32(rand(1024,2))
> >>> print ("\nx before\n",x[1:5,:],"\n")
> >>> @time eulerStep!(x)
> >>> print ("\nx after is unchanged!\n",x[1:5,:],"\n")
> >>> 
> >>> I see the same behavior on 0.3.3 and 0.4.0, both release and debug
> >>> binaries, on OSX and Linux.



Re: [julia-users] Changes to array are not visible from caller

2014-12-11 Thread Isaiah Norton
>
> Is there any way to update the array in-place without writing an explicit
> loop? I imagine that would be more efficient.
>

Not yet - see the discussion in #249 (and many others linked from there):
https://github.com/JuliaLang/julia/issues/249


> Mark
>
> On Thursday, December 11, 2014 10:23:35 AM UTC-7, John Myles White wrote:
>>
>> Nope.
>>
>> You'll find Julia much easier to program in if you always replace x += y
>> with x = x + y before attempting to reason about performance. In this case,
>> you'll
>> get
>>
>> x[:, :] = x[:, :] + 1.0f-5 * dxdt
>>
>> In other words, you literally make a copy of the entire matrix x before
>> doing any useful work.
>>
>>  -- John
>>
>> On Dec 11, 2014, at 12:21 PM, Mark Stock  wrote:
>>
>> The line now reads
>>
>> x[:,:] += 1.0f-5*dxdt
>>
>> And the result is now correct, but memory usage increased. Shouldn't it
>> go down if we're re-assigning to the same variable?
>>
>> On Wednesday, December 10, 2014 11:57:28 PM UTC-7, Isaiah wrote:
>>>
>>> `x += ...` is equivalent to writing `x = x + ...` which rebinds the
>>> variable within that function. Instead, do an explicit array assignment
>>> `x[:,:] = ...`
>>>
>>> This is discussed in the manual with a warning about type changes, but
>>> the implication for arrays should probably be made clear as well:
>>> http://julia.readthedocs.org/en/latest/manual/mathematical-
>>> operations/#updating-operators
>>>
>>> (there are some ongoing discussions about in-place array operators to
>>> improve the situation)
>>>
>>> On Wed, Dec 10, 2014 at 7:44 PM, Mark Stock  wrote:
>>>
 Hello, n00b Julia user here. I have two functions that change the
 values of a passed-in array. One works (dxFromX), but for the other one
 (eulerStep) the caller does not see any changes to the array. Why is this?

 function dxFromX!(x,dxdt)
   fill!(dxdt,0.0)

   for itarg = 1:size(x,1)
 for isrc = 1:size(x,1)
   dx = x[isrc,1] - x[itarg,1]
   dy = x[isrc,2] - x[itarg,2]
   coeff = 1.0f0 / (dx^2 + dy^2 + 0.1f0)
   dxdt[itarg,1] -= coeff * dy
   dxdt[itarg,2] += coeff * dx
 end
   end
 end

 function eulerStep!(x)
   dxdt = zeros(x)
   print ("\ndxdt before\n",dxdt[1:5,:],"\n")
   dxFromX!(x,dxdt)
   print ("\ndxdt after has changed\n",dxdt[1:5,:],"\n")
   x += 1.0f-5*dxdt
   print ("\nx inside\n",x[1:5,:],"\n")
 end

 x = float32(rand(1024,2))
 print ("\nx before\n",x[1:5,:],"\n")
 @time eulerStep!(x)
 print ("\nx after is unchanged!\n",x[1:5,:],"\n")

 I see the same behavior on 0.3.3 and 0.4.0, both release and debug
 binaries, on OSX and Linux.

>>>
>>>
>>


Re: [julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Peter Simon
One thing I noticed after a quick glance:  The ordering of your nested 
loops is very cache-unfriendly.  Julia stores arrays in column-major order 
(same as Fortran) so that nested loops should arrange that the first 
subscripts of multidimensional arrays are varied most rapidly.

--Peter

On Thursday, December 11, 2014 9:47:33 AM UTC-8, Petr Krysl wrote:
>
> One more note: I conjectured that perhaps the compiler was not able to 
> infer correctly the type of the matrices,  so I hardwired (in the actual FE 
> code)
>
> Jac = 1.0; gradN = gradNparams[j]/(J); # get rid of Rm for the moment
>
> About 10% less memory used, runtime about the same.  So, no effect really. 
> Loops are still slower than the vectorized code by a factor of two.
>
> Petr
>
>
>
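
[Editor's note: Peter's column-major point can be sketched with a toy benchmark. This is an illustrative example (array size and function names are made up for this note, not taken from the thread):

```julia
# Julia stores arrays column-major: A[i, j] and A[i+1, j] are adjacent
# in memory, so the inner loop should run over the FIRST subscript.
function colmajor_sum(A)              # cache-friendly
    s = zero(eltype(A))
    for j in 1:size(A, 2), i in 1:size(A, 1)   # i (rows) varies fastest
        s += A[i, j]
    end
    return s
end

function rowmajor_sum(A)              # cache-unfriendly, strided access
    s = zero(eltype(A))
    for i in 1:size(A, 1), j in 1:size(A, 2)   # j (columns) varies fastest
        s += A[i, j]
    end
    return s
end

A = rand(4000, 4000)
@time colmajor_sum(A)   # typically several times faster than the next line
@time rowmajor_sum(A)
```

Both functions compute the same sum; only the memory access pattern differs.]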

Re: [julia-users] Changes to array are not visible from caller

2014-12-11 Thread Mark Stock
Is there any way to update the array in-place without writing an explicit 
loop? I imagine that would be more efficient.

Mark

On Thursday, December 11, 2014 10:23:35 AM UTC-7, John Myles White wrote:
>
> Nope.
>
> You'll find Julia much easier to program in if you always replace x += y 
> with x = x + y before attempting to reason about performance. In this case, 
> you'll
> get
>
> x[:, :] = x[:, :] + 1.0f-5 * dxdt
>
> In other words, you literally make a copy of the entire matrix x before 
> doing any useful work.
>
>  -- John
>
> On Dec 11, 2014, at 12:21 PM, Mark Stock  > wrote:
>
> The line now reads
>
> x[:,:] += 1.0f-5*dxdt
>
> And the result is now correct, but memory usage increased. Shouldn't it go 
> down if we're re-assigning to the same variable?
>
> On Wednesday, December 10, 2014 11:57:28 PM UTC-7, Isaiah wrote:
>>
>> `x += ...` is equivalent to writing `x = x + ...` which rebinds the 
>> variable within that function. Instead, do an explicit array assignment 
>> `x[:,:] = ...`
>>
>> This is discussed in the manual with a warning about type changes, but 
>> the implication for arrays should probably be made clear as well: 
>> http://julia.readthedocs.org/en/latest/manual/mathematical-operations/#updating-operators
>>
>> (there are some ongoing discussions about in-place array operators to 
>> improve the situation)
>>
>> On Wed, Dec 10, 2014 at 7:44 PM, Mark Stock  wrote:
>>
>>> Hello, n00b Julia user here. I have two functions that change the values 
>>> of a passed-in array. One works (dxFromX), but for the other one 
>>> (eulerStep) the caller does not see any changes to the array. Why is this?
>>>
>>> function dxFromX!(x,dxdt)
>>>   fill!(dxdt,0.0)
>>>
>>>   for itarg = 1:size(x,1)
>>> for isrc = 1:size(x,1)
>>>   dx = x[isrc,1] - x[itarg,1]
>>>   dy = x[isrc,2] - x[itarg,2]
>>>   coeff = 1.0f0 / (dx^2 + dy^2 + 0.1f0)
>>>   dxdt[itarg,1] -= coeff * dy
>>>   dxdt[itarg,2] += coeff * dx
>>> end
>>>   end
>>> end
>>>
>>> function eulerStep!(x)
>>>   dxdt = zeros(x)
>>>   print ("\ndxdt before\n",dxdt[1:5,:],"\n")
>>>   dxFromX!(x,dxdt)
>>>   print ("\ndxdt after has changed\n",dxdt[1:5,:],"\n")
>>>   x += 1.0f-5*dxdt
>>>   print ("\nx inside\n",x[1:5,:],"\n")
>>> end
>>>
>>> x = float32(rand(1024,2))
>>> print ("\nx before\n",x[1:5,:],"\n")
>>> @time eulerStep!(x)
>>> print ("\nx after is unchanged!\n",x[1:5,:],"\n")
>>>
>>> I see the same behavior on 0.3.3 and 0.4.0, both release and debug 
>>> binaries, on OSX and Linux. 
>>>
>>
>>
>
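
[Editor's note: one way to do the update in place without hand-writing the loop is the BLAS wrapper shipped with Base. A sketch, assuming the module path of the Julia 0.3 era (`Base.LinAlg.BLAS`); later versions moved this to the `LinearAlgebra` standard library:

```julia
import Base.LinAlg.BLAS: axpy!

x    = float32(rand(1024, 2))
dxdt = float32(rand(1024, 2))

# axpy!(a, X, Y) overwrites Y with a*X + Y.  vec() reshapes without
# copying, so this mutates x directly -- no temporary array, and no
# rebinding pitfall, since the caller's array is written through.
axpy!(1.0f-5, vec(dxdt), vec(x))
```
]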

Re: [julia-users] Roadmap

2014-12-11 Thread Stefan Karpinski
We might want to link to canned searches on GitHub to issues that are
relevant. For example, we do use milestones to categorize issues so we
could link to stable release issues and development release issues. That's
not quite a roadmap but it does help to give visitors some clue about
what's in the works without adding to the burden for developers (since
we're already using the milestones for organizational purposes).

On Thu, Dec 11, 2014 at 10:50 AM, John Myles White  wrote:

> This is a very good point. I'd label this as something like "core unsolved
> challenges". Julia #265 (https://github.com/JuliaLang/julia/issues/265)
> comes to mind.
>
> In general, a list of the "big" issues would be much easier to maintain
> than a list of goals for the future. We could just use a tag like "core" on
> the issue tracker.
>
>  -- John
>
> On Dec 11, 2014, at 4:49 AM, Mike Innes  wrote:
>
> It seems to me that a lot of FAQs could be answered by a simple list of
> the community's/core developers' priorities. For example:
>
> We care about module load times and static compilation, so that's going to
> happen eventually. We care about package documentation, which is basically
> done. We don't care as much about deterministic memory management or TCO,
> so neither of those things are happening any time soon.
>
> It doesn't have to be a commitment to releases or dates, or even be
> particularly detailed, to give a good sense of where Julia is headed from a
> user perspective.
>
> Indeed, it's only the same things you end up posting on HN every time
> someone complains that Gadfly is slow.
>
> On 11 December 2014 at 03:01, Tim Holy  wrote:
>
>> Really nice summaries, John and Tony.
>>
>> On Thursday, December 11, 2014 02:08:54 AM Boylan, Ross wrote:
>> > BTW, is 0.4 still in a "you don't want to go there" state for users of
>> > julia?
>>
>> In short, yes---for most users I'd personally recommend sticking with 0.3.
>> Unless you simply _must_ have some of its lovely new features. But be
>> prepared
>> to update your code basically every week or so to deal with changes.
>>
>> --Tim
>>
>>
>
>


Re: [julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
One more note: I conjectured that perhaps the compiler was not able to 
correctly infer the type of the matrices, so I hardwired (in the actual FE 
code)

Jac = 1.0; gradN = gradNparams[j]/(J); # get rid of Rm for the moment

About 10% less memory used, runtime about the same.  So, no effect really. 
Loops are still slower than the vectorized code by a factor of two.

Petr




Re: [julia-users] Aren't loops supposed to be faster?

2014-12-11 Thread Petr Krysl
Dear Andreas,

Thank you very much. True, I had not noticed that. I put the definitions 
of the arrays outside of the two functions so that their results could be 
compared.

What I'm trying to do here is write a simple chunk of code that would 
reproduce the conditions in the FE package.
There the vectorized code and the loops only see local variables, declared 
above the major loop.  So in my opinion the conditions then are the same as 
in the corrected fragment from the gist (only local variables).

Now I can see that the fragment for some reason did not reproduce the 
conditions from the full code. Indeed, as you predicted, the loop 
implementation is almost 10 times faster than the vectorized version. 
However, in the FE code the loops run twice as slow and consume more 
memory.

Just in case you, Andreas, or anyone else are curious,  here is the full FE 
code that displays the weird behavior of loops being slower than vectorized 
code.
https://gist.github.com/PetrKryslUCSD/ae4a0f218fe50abe370f

Thanks again,

Petr

On Thursday, December 11, 2014 9:02:00 AM UTC-8, Andreas Noack wrote:
>
> See the comment in the gist.
>
> 2014-12-11 11:47 GMT-05:00 Petr Krysl >:
>
>> Acting upon the advice that replacing matrix-matrix multiplications in 
>> vectorized form with loops would help with performance, I chopped out a 
>> piece of code from my finite element solver (
>> https://gist.github.com/anonymous/4ec426096c02faa4354d) and ran some 
>> tests with the following results:
>>
>> Vectorized code:
>> elapsed time: 0.326802682 seconds (134490340 bytes allocated, 17.06% gc 
>> time)
>>
>> Loops code:
>> elapsed time: 4.681451441 seconds (997454276 bytes allocated, 9.05% gc 
>> time) 
>>
>> SLOWER and using MORE memory?!
>>
>> I must be doing something terribly wrong.
>>
>> Petr
>>
>>
>

Re: [julia-users] Changes to array are not visible from caller

2014-12-11 Thread Tim Holy
On Thursday, December 11, 2014 09:14:32 AM Mark Stock wrote:
> Wow. I thought I read the docs thoroughly, and I didn't see anything about
> how += (without the [] operator!) rebinds. That's a surprising behavior to
> many scientific programmers, and should be better documented.

Please, add that documentation. (Go to 
https://github.com/JuliaLang/julia/tree/master/doc, find the file you need to 
edit, and click on the little pencil icon.)

--Tim

> 
> Thank you for the quick reply!
> 
> On Wednesday, December 10, 2014 11:57:28 PM UTC-7, Isaiah wrote:
> > `x += ...` is equivalent to writing `x = x + ...` which rebinds the
> > variable within that function. Instead, do an explicit array assignment
> > `x[:,:] = ...`
> > 
> > This is discussed in the manual with a warning about type changes, but the
> > implication for arrays should probably be made clear as well:
> > http://julia.readthedocs.org/en/latest/manual/mathematical-operations/#upd
> > ating-operators
> > 
> > (there are some ongoing discussions about in-place array operators to
> > improve the situation)
> > 
> > On Wed, Dec 10, 2014 at 7:44 PM, Mark Stock  > 
> > > wrote:
> >> Hello, n00b Julia user here. I have two functions that change the values
> >> of a passed-in array. One works (dxFromX), but for the other one
> >> (eulerStep) the caller does not see any changes to the array. Why is
> >> this?
> >> 
> >> function dxFromX!(x,dxdt)
> >> 
> >>   fill!(dxdt,0.0)
> >>   
> >>   for itarg = 1:size(x,1)
> >>   
> >> for isrc = 1:size(x,1)
> >> 
> >>   dx = x[isrc,1] - x[itarg,1]
> >>   dy = x[isrc,2] - x[itarg,2]
> >>   coeff = 1.0f0 / (dx^2 + dy^2 + 0.1f0)
> >>   dxdt[itarg,1] -= coeff * dy
> >>   dxdt[itarg,2] += coeff * dx
> >> 
> >> end
> >>   
> >>   end
> >> 
> >> end
> >> 
> >> function eulerStep!(x)
> >> 
> >>   dxdt = zeros(x)
> >>   print ("\ndxdt before\n",dxdt[1:5,:],"\n")
> >>   dxFromX!(x,dxdt)
> >>   print ("\ndxdt after has changed\n",dxdt[1:5,:],"\n")
> >>   x += 1.0f-5*dxdt
> >>   print ("\nx inside\n",x[1:5,:],"\n")
> >> 
> >> end
> >> 
> >> x = float32(rand(1024,2))
> >> print ("\nx before\n",x[1:5,:],"\n")
> >> @time eulerStep!(x)
> >> print ("\nx after is unchanged!\n",x[1:5,:],"\n")
> >> 
> >> I see the same behavior on 0.3.3 and 0.4.0, both release and debug
> >> binaries, on OSX and Linux.



Re: [julia-users] Changes to array are not visible from caller

2014-12-11 Thread John Myles White
Nope.

You'll find Julia much easier to program in if you always replace x += y with x 
= x + y before attempting to reason about performance. In this case, you'll
get

x[:, :] = x[:, :] + 1.0f-5 * dxdt

In other words, you literally make a copy of the entire matrix x before doing 
any useful work.

 -- John

On Dec 11, 2014, at 12:21 PM, Mark Stock  wrote:

> The line now reads
> 
> x[:,:] += 1.0f-5*dxdt
> 
> And the result is now correct, but memory usage increased. Shouldn't it go 
> down if we're re-assigning to the same variable?
> 
> On Wednesday, December 10, 2014 11:57:28 PM UTC-7, Isaiah wrote:
> `x += ...` is equivalent to writing `x = x + ...` which rebinds the variable 
> within that function. Instead, do an explicit array assignment `x[:,:] = ...`
> 
> This is discussed in the manual with a warning about type changes, but the 
> implication for arrays should probably be made clear as well: 
> http://julia.readthedocs.org/en/latest/manual/mathematical-operations/#updating-operators
> 
> (there are some ongoing discussions about in-place array operators to improve 
> the situation)
> 
> On Wed, Dec 10, 2014 at 7:44 PM, Mark Stock  wrote:
> Hello, n00b Julia user here. I have two functions that change the values of a 
> passed-in array. One works (dxFromX), but for the other one (eulerStep) the 
> caller does not see any changes to the array. Why is this?
> 
> function dxFromX!(x,dxdt)
>   fill!(dxdt,0.0)
> 
>   for itarg = 1:size(x,1)
> for isrc = 1:size(x,1)
>   dx = x[isrc,1] - x[itarg,1]
>   dy = x[isrc,2] - x[itarg,2]
>   coeff = 1.0f0 / (dx^2 + dy^2 + 0.1f0)
>   dxdt[itarg,1] -= coeff * dy
>   dxdt[itarg,2] += coeff * dx
> end
>   end
> end
> 
> function eulerStep!(x)
>   dxdt = zeros(x)
>   print ("\ndxdt before\n",dxdt[1:5,:],"\n")
>   dxFromX!(x,dxdt)
>   print ("\ndxdt after has changed\n",dxdt[1:5,:],"\n")
>   x += 1.0f-5*dxdt
>   print ("\nx inside\n",x[1:5,:],"\n")
> end
> 
> x = float32(rand(1024,2))
> print ("\nx before\n",x[1:5,:],"\n")
> @time eulerStep!(x)
> print ("\nx after is unchanged!\n",x[1:5,:],"\n")
> 
> I see the same behavior on 0.3.3 and 0.4.0, both release and debug binaries, 
> on OSX and Linux. 
> 



Re: [julia-users] Changes to array are not visible from caller

2014-12-11 Thread Mark Stock
The line now reads

x[:,:] += 1.0f-5*dxdt

And the result is now correct, but memory usage increased. Shouldn't it go 
down if we're re-assigning to the same variable?

On Wednesday, December 10, 2014 11:57:28 PM UTC-7, Isaiah wrote:
>
> `x += ...` is equivalent to writing `x = x + ...` which rebinds the 
> variable within that function. Instead, do an explicit array assignment 
> `x[:,:] = ...`
>
> This is discussed in the manual with a warning about type changes, but the 
> implication for arrays should probably be made clear as well: 
> http://julia.readthedocs.org/en/latest/manual/mathematical-operations/#updating-operators
>
> (there are some ongoing discussions about in-place array operators to 
> improve the situation)
>
> On Wed, Dec 10, 2014 at 7:44 PM, Mark Stock  > wrote:
>
>> Hello, n00b Julia user here. I have two functions that change the values 
>> of a passed-in array. One works (dxFromX), but for the other one 
>> (eulerStep) the caller does not see any changes to the array. Why is this?
>>
>> function dxFromX!(x,dxdt)
>>   fill!(dxdt,0.0)
>>
>>   for itarg = 1:size(x,1)
>> for isrc = 1:size(x,1)
>>   dx = x[isrc,1] - x[itarg,1]
>>   dy = x[isrc,2] - x[itarg,2]
>>   coeff = 1.0f0 / (dx^2 + dy^2 + 0.1f0)
>>   dxdt[itarg,1] -= coeff * dy
>>   dxdt[itarg,2] += coeff * dx
>> end
>>   end
>> end
>>
>> function eulerStep!(x)
>>   dxdt = zeros(x)
>>   print ("\ndxdt before\n",dxdt[1:5,:],"\n")
>>   dxFromX!(x,dxdt)
>>   print ("\ndxdt after has changed\n",dxdt[1:5,:],"\n")
>>   x += 1.0f-5*dxdt
>>   print ("\nx inside\n",x[1:5,:],"\n")
>> end
>>
>> x = float32(rand(1024,2))
>> print ("\nx before\n",x[1:5,:],"\n")
>> @time eulerStep!(x)
>> print ("\nx after is unchanged!\n",x[1:5,:],"\n")
>>
>> I see the same behavior on 0.3.3 and 0.4.0, both release and debug 
>> binaries, on OSX and Linux. 
>>
>
>

Re: [julia-users] Changes to array are not visible from caller

2014-12-11 Thread Mark Stock
Wow. I thought I read the docs thoroughly, and I didn't see anything about 
how += (without the [] operator!) rebinds. That's a surprising behavior to 
many scientific programmers, and should be better documented.

Thank you for the quick reply!

On Wednesday, December 10, 2014 11:57:28 PM UTC-7, Isaiah wrote:
>
> `x += ...` is equivalent to writing `x = x + ...` which rebinds the 
> variable within that function. Instead, do an explicit array assignment 
> `x[:,:] = ...`
>
> This is discussed in the manual with a warning about type changes, but the 
> implication for arrays should probably be made clear as well: 
> http://julia.readthedocs.org/en/latest/manual/mathematical-operations/#updating-operators
>
> (there are some ongoing discussions about in-place array operators to 
> improve the situation)
>
> On Wed, Dec 10, 2014 at 7:44 PM, Mark Stock  > wrote:
>
>> Hello, n00b Julia user here. I have two functions that change the values 
>> of a passed-in array. One works (dxFromX), but for the other one 
>> (eulerStep) the caller does not see any changes to the array. Why is this?
>>
>> function dxFromX!(x,dxdt)
>>   fill!(dxdt,0.0)
>>
>>   for itarg = 1:size(x,1)
>> for isrc = 1:size(x,1)
>>   dx = x[isrc,1] - x[itarg,1]
>>   dy = x[isrc,2] - x[itarg,2]
>>   coeff = 1.0f0 / (dx^2 + dy^2 + 0.1f0)
>>   dxdt[itarg,1] -= coeff * dy
>>   dxdt[itarg,2] += coeff * dx
>> end
>>   end
>> end
>>
>> function eulerStep!(x)
>>   dxdt = zeros(x)
>>   print ("\ndxdt before\n",dxdt[1:5,:],"\n")
>>   dxFromX!(x,dxdt)
>>   print ("\ndxdt after has changed\n",dxdt[1:5,:],"\n")
>>   x += 1.0f-5*dxdt
>>   print ("\nx inside\n",x[1:5,:],"\n")
>> end
>>
>> x = float32(rand(1024,2))
>> print ("\nx before\n",x[1:5,:],"\n")
>> @time eulerStep!(x)
>> print ("\nx after is unchanged!\n",x[1:5,:],"\n")
>>
>> I see the same behavior on 0.3.3 and 0.4.0, both release and debug 
>> binaries, on OSX and Linux. 
>>
>
>
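
[Editor's note: the distinction discussed in this thread boils down to a pair of toy functions. Illustrative only; the function names are invented for this sketch:

```julia
function rebind!(x, y)
    x = x + y       # `+` builds a NEW array; `=` rebinds the local name x.
end                 # The caller's array is untouched.

function mutate!(x, y)
    x[:] = x + y    # Indexed assignment writes into the caller's array.
end

a = [1.0, 2.0]
b = [10.0, 10.0]
rebind!(a, b)       # a is still [1.0, 2.0] -- the caller sees no change
mutate!(a, b)       # a is now  [11.0, 12.0]
```
]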

Re: [julia-users] Unexpected append! behavior

2014-12-11 Thread Mike Innes
Happy to help

On 11 December 2014 at 17:04, Sean McBane  wrote:

> Ah, I see the error in my thinking now. Knowing what the ! signifies makes
> it make a lot more sense.
>
> Thanks guys for putting up with a newbie. This was probably one of those 1
> in the morning questions that I should have waited to look at the next day
> before asking for help; it seems obvious now.
>
> -- Sean
>
> On Thursday, December 11, 2014 3:26:12 AM UTC-6, Mike Innes wrote:
>>
>> Think of append!(X, Y) as equivalent to X = vcat(X, Y). You called
>> append! twice, so X gets Y appended twice.
>>
>> julia> X = [1,2]; Y = [3,4];
>>
>> julia> X = vcat(X,Y)
>> [1, 2, 3, 4]
>>
>> In your example you went ahead and did this again:
>>
>> julia> X = (X = vcat(X, Y))
>> [1, 2, 3, 4, 3, 4]
>>
>> But if you reset X, Y via the first statement and *then* call X =
>> append!(X, Y), it works as you would expect.
>>
>> julia> X = [1,2]; Y = [3,4];
>>
>> julia> X = append!(X, Y) # same as X = (X = vcat(X, Y))
>> [1, 2, 3, 4]
>>
>> On 11 December 2014 at 07:51, Alex Ames  wrote:
>>
>>> Functions that end with an exclamation point modify their arguments, but
>>> they can return values just like any other function. For example:
>>>
>>> julia> x = [1,2]; y = [3, 4]
>>> 2-element Array{Int64,1}:
>>>  3
>>>  4
>>>
>>> julia> append!(x,y)
>>> 4-element Array{Int64,1}:
>>>  1
>>>  2
>>>  3
>>>  4
>>>
>>> julia> z = append!(x,y)
>>> 6-element Array{Int64,1}:
>>>  1
>>>  2
>>>  3
>>>  4
>>>  3
>>>  4
>>>
>>> julia> z
>>> 6-element Array{Int64,1}:
>>>  1
>>>  2
>>>  3
>>>  4
>>>  3
>>>  4
>>>
>>> julia> x
>>> 6-element Array{Int64,1}:
>>>  1
>>>  2
>>>  3
>>>  4
>>>  3
>>>  4
>>>
>>> The append! function takes two arrays, appends the second to the first,
>>> then returns the values now contained by the first array. No recursion
>>> craziness required.
>>>
>>> On Thursday, December 11, 2014 1:11:50 AM UTC-6, Sean McBane wrote:

 Ivar is correct; I was running in the Windows command prompt and
 couldn't copy and paste so I copied it by hand and made an error.

 Ok, so I understand that append!(X,Y) is modifying X in place. But I
 still do not get where the output for the second case, where the result of
 append!(X,Y) is assigned back into X is what it is. It would make sense to
 me if this resulted in a recursion with Y forever getting appended to X,
 but as it is I don't understand.

 Thanks.

 -- Sean

 On Thursday, December 11, 2014 12:42:45 AM UTC-6, Ivar Nesje wrote:
>
> I assume the first line should be
>
> > X = [1,2]; Y = [3,4];
>
> Then the results you get make sense. The thing is that Julia has
> mutable arrays, and the ! at the end of append! indicates that it is a
> function that mutates its argument.


>>
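
[Editor's note: the key fact, in REPL form: append! both mutates and returns its first argument, so assigning the result creates no copy. An illustrative session:

```julia
X = [1, 2]; Y = [3, 4]
Z = append!(X, Y)     # X is grown in place to [1, 2, 3, 4]

Z === X               # true: Z is the very same array object, not a copy
append!(X, Y)         # a second call grows that same array again,
                      # so X == Z == [1, 2, 3, 4, 3, 4]
```
]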


Re: [julia-users] Unexpected append! behavior

2014-12-11 Thread Sean McBane
Ah, I see the error in my thinking now. Knowing what the ! signifies makes 
it make a lot more sense.

Thanks guys for putting up with a newbie. This was probably one of those 1 
in the morning questions that I should have waited to look at the next day 
before asking for help; it seems obvious now.

-- Sean

On Thursday, December 11, 2014 3:26:12 AM UTC-6, Mike Innes wrote:
>
> Think of append!(X, Y) as equivalent to X = vcat(X, Y). You called append! 
> twice, so X gets Y appended twice.
>
> julia> X = [1,2]; Y = [3,4];
>
> julia> X = vcat(X,Y)
> [1, 2, 3, 4]
>
> In your example you went ahead and did this again:
>
> julia> X = (X = vcat(X, Y))
> [1, 2, 3, 4, 3, 4]
>
> But if you reset X, Y via the first statement and *then* call X = 
> append!(X, Y), it works as you would expect.
>
> julia> X = [1,2]; Y = [3,4];
>
> julia> X = append!(X, Y) # same as X = (X = vcat(X, Y))
> [1, 2, 3, 4]
>
> On 11 December 2014 at 07:51, Alex Ames  > wrote:
>
>> Functions that end with an exclamation point modify their arguments, but 
>> they can return values just like any other function. For example:
>>
>> julia> x = [1,2]; y = [3, 4]
>> 2-element Array{Int64,1}:
>>  3
>>  4
>>
>> julia> append!(x,y)
>> 4-element Array{Int64,1}:
>>  1
>>  2
>>  3
>>  4
>>
>> julia> z = append!(x,y)
>> 6-element Array{Int64,1}:
>>  1
>>  2
>>  3
>>  4
>>  3
>>  4
>>
>> julia> z
>> 6-element Array{Int64,1}:
>>  1
>>  2
>>  3
>>  4
>>  3
>>  4
>>
>> julia> x
>> 6-element Array{Int64,1}:
>>  1
>>  2
>>  3
>>  4
>>  3
>>  4
>>
>> The append! function takes two arrays, appends the second to the first, 
>> then returns the values now contained by the first array. No recursion 
>> craziness required.
>>
>> On Thursday, December 11, 2014 1:11:50 AM UTC-6, Sean McBane wrote:
>>>
>>> Ivar is correct; I was running in the Windows command prompt and 
>>> couldn't copy and paste so I copied it by hand and made an error.
>>>
>>> Ok, so I understand that append!(X,Y) is modifying X in place. But I 
>>> still do not get where the output for the second case, where the result of 
>>> append!(X,Y) is assigned back into X is what it is. It would make sense to 
>>> me if this resulted in a recursion with Y forever getting appended to X, 
>>> but as it is I don't understand.
>>>
>>> Thanks.
>>>
>>> -- Sean
>>>
>>> On Thursday, December 11, 2014 12:42:45 AM UTC-6, Ivar Nesje wrote:

 I assume the first line should be 

 > X = [1,2]; Y = [3,4]; 

 Then the results you get make sense. The thing is that Julia has 
 mutable arrays, and the ! at the end of append! indicates that it is a 
 function that mutates its argument.
>>>
>>>
>
