[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
It really depends on what he means by "large" and "sparse". There is no 
indication from the OP that he specifically is choosing a direct over an 
iterative method, just that he knows \ is the go-to for solving Ax=b that 
he tried. It should be mentioned that direct solvers are O(n^3) and 
factorizations are not necessarily sparse, and so depending on what is 
meant by those terms, an algorithm which uses a factorization solver may 
not be tenable.

Another point is that he likely shouldn't be using Bigs for this. He should 
likely try ArbFloats or DoubleDoubles to make the computation faster.  

On Wednesday, August 10, 2016 at 9:14:56 PM UTC-7, Ralph Smith wrote:
>
> The OP wants extremely high precision and indicated that he was willing to 
> factor the matrix.  I recommended iterative refinement which converges very 
> quickly, and exploits the state-of-the-art direct solvers.  The solvers in 
> IterativeSolvers.jl are for a different domain, where the matrix is too 
> large or expensive to factor.  To get high accuracy with them generally 
> requires tailored preconditioners, which are not "out of the box". In fact 
> one usually needs a preconditioner to get any convergence with the 
> non-symmetric ones for interesting ranks.  (I've been struggling for months 
> to find a good preconditioner for an application of GMRES in my work, so 
> this is a sore point.)
>
> On Wednesday, August 10, 2016 at 10:56:22 PM UTC-4, Chris Rackauckas wrote:
>>
>> Yes, the textbook answer is: why do you want to use `\`? Iterative techniques 
>> are likely better suited to the problem. There's no need to roll your own; 
>> the package IterativeSolvers.jl has a good number of techniques implemented 
>> which are well-suited for the problem since A is a large sparse matrix. 
>> Their methods should work out of the box with Bigs, though you will likely 
>> want to adjust the tolerances.
>>
>> On Wednesday, August 10, 2016 at 7:37:35 PM UTC-7, Ralph Smith wrote:
>>>
>>> Here is a textbook answer.  Appropriate choice of n depends on condition 
>>> of A.
>>>
>>> """
>>>     iterimprove(A, b, n=1, verbose=true)
>>>
>>> Solve `A x = b` for `x` using iterative improvement.
>>> """
>>> function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
>>>                                        b::Vector{T}, n=1, verbose=true)
>>>     eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation"))
>>>     A0 = SparseMatrixCSC{Float64}(A)
>>>     F = factorize(A0)
>>>     x = zeros(b)
>>>     r = copy(b)
>>>     for iter = 1:n+1
>>>         y = F \ Vector{Float64}(r)
>>>         for i in eachindex(x)
>>>             x[i] += y[i]
>>>         end
>>>         r = b - A * x
>>>         if verbose
>>>             @printf "at iter %d resnorm = %.3g\n" iter norm(r)
>>>         end
>>>     end
>>>     x
>>> end
>>>
>>>
>>>
>>> On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen 
>>> wrote:

 Hello

 I'm trying to solve a large, sparse and unsymmetrical linear system Ax 
 = b.
 For this task I'm using Julia's *SparseMatrixCSC* type for the 
 definition of my matrices and Julia's built-in backslash `\` operator for 
 the solution of the system.
 I need *quadruple precision* and thus I've been trying to implement my 
 routine with the *BigFloat* type together with the SparseMatrixCSC 
 type.

 To illustrate this, I give a simple example here:
 set_bigfloat_precision(128);
 A  = speye(BigFloat, 2, 2);
 b = ones(BigFloat, 2, 1);
 x = A\b;

 If I do this I either get a StackOverFlow error:
 ERROR: StackOverflowError:
  in copy at array.jl:100
  in float at sparse/sparsematrix.jl:234
  in call at essentials.jl:57 (repeats 254 times)

 or the solver seems to run forever and never terminates. As the second 
 error indicates it seems like the sparse solver only accepts the normal 
 *float* types.
 My question is then: is there a way to get quadruple precision with the 
 standard solvers in Julia (UMFPACK, I assume), or should I look 
 for something else? (If so, any suggestions are welcome :) )

 Regards Nicklas A.



[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Ralph Smith
The OP wants extremely high precision and indicated that he was willing to 
factor the matrix.  I recommended iterative refinement which converges very 
quickly, and exploits the state-of-the-art direct solvers.  The solvers in 
IterativeSolvers.jl are for a different domain, where the matrix is too 
large or expensive to factor.  To get high accuracy with them generally 
requires tailored preconditioners, which are not "out of the box". In fact 
one usually needs a preconditioner to get any convergence with the 
non-symmetric ones for interesting ranks.  (I've been struggling for months 
to find a good preconditioner for an application of GMRES in my work, so 
this is a sore point.)

On Wednesday, August 10, 2016 at 10:56:22 PM UTC-4, Chris Rackauckas wrote:
>
> Yes, the textbook answer is: why do you want to use `\`? Iterative techniques 
> are likely better suited to the problem. There's no need to roll your own; 
> the package IterativeSolvers.jl has a good number of techniques implemented 
> which are well-suited for the problem since A is a large sparse matrix. 
> Their methods should work out of the box with Bigs, though you will likely 
> want to adjust the tolerances.
>
> On Wednesday, August 10, 2016 at 7:37:35 PM UTC-7, Ralph Smith wrote:
>>
>> Here is a textbook answer.  Appropriate choice of n depends on condition 
>> of A.
>>
>> """
>>     iterimprove(A, b, n=1, verbose=true)
>>
>> Solve `A x = b` for `x` using iterative improvement.
>> """
>> function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
>>                                        b::Vector{T}, n=1, verbose=true)
>>     eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation"))
>>     A0 = SparseMatrixCSC{Float64}(A)
>>     F = factorize(A0)
>>     x = zeros(b)
>>     r = copy(b)
>>     for iter = 1:n+1
>>         y = F \ Vector{Float64}(r)
>>         for i in eachindex(x)
>>             x[i] += y[i]
>>         end
>>         r = b - A * x
>>         if verbose
>>             @printf "at iter %d resnorm = %.3g\n" iter norm(r)
>>         end
>>     end
>>     x
>> end
>>
>>
>>
>> On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen wrote:
>>>
>>> Hello
>>>
>>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = 
>>> b.
>>> For this task I'm using Julia's *SparseMatrixCSC* type for the 
>>> definition of my matrices and Julia's built-in backslash `\` operator for 
>>> the solution of the system.
>>> I need *quadruple precision* and thus I've been trying to implement my 
>>> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>>>
>>> To illustrate this, I give a simple example here:
>>> set_bigfloat_precision(128);
>>> A  = speye(BigFloat, 2, 2);
>>> b = ones(BigFloat, 2, 1);
>>> x = A\b;
>>>
>>> If I do this I either get a StackOverFlow error:
>>> ERROR: StackOverflowError:
>>>  in copy at array.jl:100
>>>  in float at sparse/sparsematrix.jl:234
>>>  in call at essentials.jl:57 (repeats 254 times)
>>>
>>> or the solver seems to run forever and never terminates. As the second 
>>> error indicates it seems like the sparse solver only accepts the normal 
>>> *float* types.
>>> My question is then: is there a way to get quadruple precision with the 
>>> standard solvers in Julia (UMFPACK, I assume), or should I look 
>>> for something else? (If so, any suggestions are welcome :) )
>>>
>>> Regards Nicklas A.
>>>
>>>

[julia-users] Re: BigInt / BigFloat / Bitshift

2016-08-10 Thread David P. Sanders


El miércoles, 10 de agosto de 2016, 18:20:11 (UTC-4), digxx escribió:
>
> As far as I understand Julia adjusts the bitsize of an integer when I use 
> BigInt.
> For example n=big(10)^1 is as many bits as needed.
> Now 2*n is still an integer though dividing makes a bigfloat out of it. 
> How big is the bigfloat? does it "resolve" up to the last integer? Does it 
> increase with size like bigint would?
>

When you create a BigFloat, it has the current default precision. You can 
ask Julia this kind of thing directly:

julia> a = big(10)^1;

julia> a / 3
3.63e+

julia> precision(ans)
256

julia> set_bigfloat_precision(1000)
1000

julia> a / 3
3.32e+

julia> precision(ans)
1000

(That was Julia 0.4. In Julia 0.5, use setprecision instead of 
set_bigfloat_precision.)

Please read the manual:

http://docs.julialang.org/en/release-0.4/manual/integers-and-floating-point-numbers/#arbitrary-precision-arithmetic


> For example, (n/2)^10 is still legit up to the last digit (neglecting the 
> fact that these would be zero in this case anyway).
> When dividing n by an integer and the result is an integer, how would I 
> achieve this?
> Is there already a simple way?
> For example I could represent n in some basis b and look where the right 
> most digit is zero and then bitshift, but the bitshift operators I found in 
> the manual (mainly <<, >>, .>> etc.) do not work for me, or I'm too stupid.
> How am I supposed to use these?
> 3 << ?
>
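[A sketch, not from the thread, addressing the last question: the shift operators are ordinary infix operators, and `trailing_zeros` (a standard Base function) gives the count needed to shift away trailing zero *bits*. The specific value of `n` here is made up for illustration:]

```julia
n = big(10)^20            # a BigInt; 10^20 = 2^20 * 5^20
m = 2 * n                 # still a BigInt

m >> 1 == n               # infix use: shift right by 1 bit, exact for BigInt
3 << 2                    # shift left by 2 bits: 3 << 2 == 12

# strip all trailing zero bits, i.e. divide by the largest power of 2:
k = m >> trailing_zeros(m)
isodd(k)                  # true: no factors of 2 remain
```

For a general base b, there is no bit shift; dividing with `div(n, b)` while `rem(n, b) == 0` does the analogous thing.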


[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
Yes, the textbook answer is: why do you want to use `\`? Iterative techniques 
are likely better suited to the problem. There's no need to roll your own; 
the package IterativeSolvers.jl has a good number of techniques implemented 
which are well-suited for the problem since A is a large sparse matrix. 
Their methods should work out of the box with Bigs, though you will likely 
want to adjust the tolerances.
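[Editorial sketch of the suggestion above, not code from the thread. The exact IterativeSolvers.jl API (the `gmres` signature, the `tol` and `maxiter` keywords, the return value) is an assumption from that era of the package; check its documentation:]

```julia
using IterativeSolvers

set_bigfloat_precision(128)
A = speye(BigFloat, 100, 100)      # trivial sparse system, just to show the call
b = ones(BigFloat, 100)

# tighten the tolerance well below Float64 eps to exploit BigFloat:
x = gmres(A, b; tol=big(1.0e-30), maxiter=200)
```

For an unsymmetric matrix, GMRES or BiCGStab(l) are the usual choices; as noted elsewhere in this thread, a preconditioner is often needed for convergence.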

On Wednesday, August 10, 2016 at 7:37:35 PM UTC-7, Ralph Smith wrote:
>
> Here is a textbook answer.  Appropriate choice of n depends on condition 
> of A.
>
> """
>     iterimprove(A, b, n=1, verbose=true)
>
> Solve `A x = b` for `x` using iterative improvement.
> """
> function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
>                                        b::Vector{T}, n=1, verbose=true)
>     eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation"))
>     A0 = SparseMatrixCSC{Float64}(A)
>     F = factorize(A0)
>     x = zeros(b)
>     r = copy(b)
>     for iter = 1:n+1
>         y = F \ Vector{Float64}(r)
>         for i in eachindex(x)
>             x[i] += y[i]
>         end
>         r = b - A * x
>         if verbose
>             @printf "at iter %d resnorm = %.3g\n" iter norm(r)
>         end
>     end
>     x
> end
>
>
>
> On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen wrote:
>>
>> Hello
>>
>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = 
>> b.
>> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
>> of my matrices and Julia's built-in backslash `\` operator for the 
>> solution of the system.
>> I need *quadruple precision* and thus I've been trying to implement my 
>> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>>
>> To illustrate this, I give a simple example here:
>> set_bigfloat_precision(128);
>> A  = speye(BigFloat, 2, 2);
>> b = ones(BigFloat, 2, 1);
>> x = A\b;
>>
>> If I do this I either get a StackOverFlow error:
>> ERROR: StackOverflowError:
>>  in copy at array.jl:100
>>  in float at sparse/sparsematrix.jl:234
>>  in call at essentials.jl:57 (repeats 254 times)
>>
>> or the solver seems to run forever and never terminates. As the second 
>> error indicates it seems like the sparse solver only accepts the normal 
>> *float* types.
>> My question is then: is there a way to get quadruple precision with the 
>> standard solvers in Julia (UMFPACK, I assume), or should I look 
>> for something else? (If so, any suggestions are welcome :) )
>>
>> Regards Nicklas A.
>>
>>

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Ralph Smith
Here is a textbook answer.  Appropriate choice of n depends on condition of 
A.

"""
    iterimprove(A, b, n=1, verbose=true)

Solve `A x = b` for `x` using iterative improvement.
"""
function iterimprove{T<:AbstractFloat}(A::SparseMatrixCSC{T},
                                       b::Vector{T}, n=1, verbose=true)
    eps(T) < eps(Float64) || throw(ArgumentError("wrong implementation"))
    A0 = SparseMatrixCSC{Float64}(A)
    F = factorize(A0)
    x = zeros(b)
    r = copy(b)
    for iter = 1:n+1
        y = F \ Vector{Float64}(r)
        for i in eachindex(x)
            x[i] += y[i]
        end
        r = b - A * x
        if verbose
            @printf "at iter %d resnorm = %.3g\n" iter norm(r)
        end
    end
    x
end
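[A minimal usage sketch of the function above (editorial addition, Julia 0.4-era syntax matching the OP's example; the identity matrix is trivial but shows the call):]

```julia
set_bigfloat_precision(128)
A = speye(BigFloat, 4, 4)     # sparse BigFloat matrix
b = ones(BigFloat, 4)         # must be a Vector{BigFloat}

x = iterimprove(A, b, 2)      # factor in Float64, then 3 refinement passes,
                              # printing the BigFloat residual norm each pass
norm(A * x - b)               # residual evaluated at full precision
```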



On Wednesday, August 10, 2016 at 3:47:10 PM UTC-4, Nicklas Andersen wrote:
>
> Hello
>
> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
> For this task I'm using Julia's *SparseMatrixCSC* type for the definition 
> of my matrices and Julia's built-in backslash `\` operator for the 
> solution of the system.
> I need *quadruple precision* and thus I've been trying to implement my 
> routine with the *BigFloat* type together with the SparseMatrixCSC type.
>
> To illustrate this, I give a simple example here:
> set_bigfloat_precision(128);
> A  = speye(BigFloat, 2, 2);
> b = ones(BigFloat, 2, 1);
> x = A\b;
>
> If I do this I either get a StackOverFlow error:
> ERROR: StackOverflowError:
>  in copy at array.jl:100
>  in float at sparse/sparsematrix.jl:234
>  in call at essentials.jl:57 (repeats 254 times)
>
> or the solver seems to run forever and never terminates. As the second 
> error indicates it seems like the sparse solver only accepts the normal 
> *float* types.
> My question is then: is there a way to get quadruple precision with the 
> standard solvers in Julia (UMFPACK, I assume), or should I look 
> for something else? (If so, any suggestions are welcome :) )
>
> Regards Nicklas A.
>
>

Re: [julia-users] Re: Advice for parallel computing

2016-08-10 Thread Islam Badreldin


You can also try `DistributedArrays` and `map`. Basically, you first distribute 
the array of your composite type on multiple Julia processes, then you call 
`map` on the distributed array.
See the first talk by Andreas Noack in the Parallel Computing workshop for more 
details http://youtu.be/euZkvgx0fG8
  -Islam
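[Editorial sketch of the `DistributedArrays` + `map` suggestion, not code from the thread. `MyType`, `doit!`, and `N` come from the original question; the non-mutating wrapper is an assumption, since `map` produces a new array rather than mutating in place:]

```julia
addprocs(4)
@everywhere using DistributedArrays
@everywhere doit(x) = (doit!(x); x)   # assumes doit! is defined on all workers

A  = Array{MyType}(N)     # initialized elsewhere, as in the question
DA = distribute(A)        # split A across the worker processes
DB = map(doit, DA)        # applied in parallel, one chunk per worker
B  = convert(Array, DB)   # gather the result back, if needed
```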
_
From: André Lage 
Sent: Wednesday, August 10, 2016 8:04 PM
Subject: [julia-users] Re: Advice for parallel computing
To: julia-users 


hi Dupont,
I would first check if ParallelAccelerator.jl does what you need:
https://github.com/IntelLabs/ParallelAccelerator.jl
Best,

André Lage.
On Wednesday, July 6, 2016 at 7:26:10 AM UTC-3, Dupont wrote:
Hi,

I have an array of composite type

A = Array{MyType}(N)
that have been initialized somewhere else.

Currently, I am doing this operation

for i=1:N
    doit!(A[i])
end

I would like to perform this operation in parallel (threads if possible on a 
single computer) but SharedArray does not accept composite types.

Thank you for your suggestions,

R






[julia-users] Re: BigInt / BigFloat / Bitshift

2016-08-10 Thread Jeffrey Sarnoff

>
> What is the difference between >> and >>> ?

 
from the help (in the REPL, at the prompt press the question mark and then 
enter >> or >>>)

  >>(x, n)

  Right bit shift operator, x >> n. For n >= 0, the result is x shifted 
  right by n bits, filling with 0s if x >= 0, 1s if x < 0, preserving the 
  sign of x. This is equivalent to fld(x, 2^n). For n < 0, this is 
  equivalent to x << -n.

  julia> Int8(13) >> 2
  3
  
  julia> bits(Int8(13))
  "00001101"
  
  julia> bits(Int8(3))
  "00000011"
  
  julia> Int8(-14) >> 2
  -4

>>>(x, n)

  Unsigned right bit shift operator, x >>> n. For n >= 0, the result is x 
  shifted right by n bits, filling with 0s. For n < 0, this is equivalent 
  to x << -n.

  For Unsigned integer types, this is equivalent to >>. For Signed integer 
  types, this is equivalent to signed(unsigned(x) >> n).

  julia> Int8(-14) >>> 2
  60
  
  julia> bits(Int8(-14))
  "11110010"
  






[julia-users] Re: Still confused about how to use pmap with sharedarrays in Julia

2016-08-10 Thread 'Greg Plowman' via julia-users
I have also found the combination of shared arrays, anonymous functions and 
parallel constructs confusing.

This StackOverflow question helped me: "Shared array usage in Julia".

In essence, "although the underlying data is shared to all workers, the 
declaration is not. You will still need to pass in the reference to [the 
shared array]"

S = SharedArray(Int,3,4)
S[:] = 1:length(S)
@everywhere f(S,i) = (println("S[$i] = $(S[i])"); S[i])

output1 = pmap(i -> f(S,i), 1:length(S))# error
output2 = pmap((S,i) -> f(S,i), fill(S,length(S)), 1:length(S))
output3 = pmap(f, fill(S,length(S)), 1:length(S))


It seems then that, in version 1, the reference S is local to the worker, 
but S is not defined on the worker, hence the error.
In versions 2 and 3 the local S is passed as an argument to the worker and 
all works as expected.



[julia-users] Re: Advice for parallel computing

2016-08-10 Thread André Lage
hi Dupont,

I would first check if ParallelAccelerator.jl does what you need:

https://github.com/IntelLabs/ParallelAccelerator.jl

Best,


André Lage.

On Wednesday, July 6, 2016 at 7:26:10 AM UTC-3, Dupont wrote:
>
> Hi,
>
> I have an array of composite type
>
> A = Array{MyType}(N)
>
> that have been initialized somewhere else.
>
> Currently, I am doing this operation
>
> for i=1:N
> doit!(A[i])
> end
>
> I would like to perform this operation in parallel (threads if possible on 
> a single computer) but SharedArray does not accept composite types.
>
> Thank you for your suggestions,
>
> R
>
>

Re: [julia-users] VideoIO failing to import in julia 0.4.6 (Fedora 24)

2016-08-10 Thread Kevin Squire
You could install an older version of ffmpeg.

I've made progress in wrapping later versions, and just need time to finish
it up. I'll post a WIP pull request soon, and hopefully finish it soon as
well (but I've been saying that for a while).

Cheers,
   Kevin

On Wed, Aug 10, 2016 at 3:31 PM, Shivkumar Chandrasekaran <00s...@gmail.com>
wrote:

> On "import VideoIO" here is the relevant piece of the error:
> ERROR: LoadError: LoadError: could not open file /home/shiv/.julia/v0.4/
> VideoIO/src/ffmpeg/AVUtil/src/../../../ffmpeg/AVUtil/v55/LIBAVUTIL.jl
>
> The problem is that it is looking for "v55" directory but even github
> seems to have only "v54" directory.
>
> Any idea how this can be fixed?
>
> Thanks.
>
> --shiv--
>


[julia-users] Still confused about how to use pmap with sharedarrays in Julia

2016-08-10 Thread Graydon Barz
What is the best way to modify the following Julia code:

my_output = pmap(x -> my_function(x, my_shared_array), x_list)

so that 'output' is consistent with 

my_output = map(x -> my_function(x, myarray), x_list)?

My naive attempt above resulted in an error:  
RemoteException(pid#, CapturedException(UndefVarError: 
my_shared_array),...  which I understand from other posts is because the 
other processes do not know of my_shared_array.

However, attempts to use various permutations of @sync @async have resulted 
in output = Task (done) @x0007 etc., and not the desired 'output' list. 
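[Sketch of the fix described in the other shared-array thread in this digest: pass the shared array as an explicit `pmap` argument instead of capturing it in the closure, so each worker receives the reference. Names are from the question:]

```julia
# one copy of the reference per work item; the underlying data is not copied:
my_output = pmap((x, S) -> my_function(x, S),
                 x_list, fill(my_shared_array, length(x_list)))
```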




Re: [julia-users] GlobalRef(Module, func) vs. :(Module.func)

2016-08-10 Thread Yichao Yu
On Thu, Aug 11, 2016 at 6:40 AM, Andrei Zh 
wrote:

> One more question related to the topic. I try to get function body
> expression in Julia 0.5 using:
>
> lambda = methods(func, types).ms[1].lambda_template
> Base.uncompressed_ast(lambda)
>
> However this gives an expression with argument names replaced by
> `SlotNumber` (e.g. `_2`). Is there a way to get function expression with
> actual variable names or a recommended way to convert `SlotNumbers` to it?
> So far I've only found this piece of code
> 
> in ReverseDiffSource.jl, but the package is currently broken under Julia
> 0.5, so I suppose there might be more stable way to do it.
>

`slotnames` of LambdaInfo should have what you want.
Do note that the name is not unique.
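[A sketch of that lookup (editorial addition; these are Julia 0.5 internals, so `lambda_template` and `slotnames` may change between versions):]

```julia
lambda = methods(func, types).ms[1].lambda_template
ast    = Base.uncompressed_ast(lambda)
names  = lambda.slotnames      # Vector{Symbol}; index i corresponds to slot _i
# e.g. a SlotNumber printed as `_2` refers to names[2], typically the
# first argument -- but as noted above, the names need not be unique.
```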


>
>
> On Monday, August 8, 2016 at 10:39:34 PM UTC+3, Andrei Zh wrote:
>>
>> Thanks, this makes sense. Just for clarification:
>>
>>
>>> It strongly depend on what you want to do and whether you care about
>>> what they represent.
>>>
>>
>> I want to apply transformations (mostly) to algebraic expressions and
>> function calls in order to simplify them, replace argument names, find
>> derivatives, etc. Thus I don't really care about the difference between
>> GlobalRef and dot notation as long as they contain the same amount of
>> information and I can evaluate them to actual function objects. But since
>> there's a choice between 2 options I was trying to understand which one is
>> more suitable for me and whether I can cast one to another (again, in the
>> context of my simple transformations).
>>
>> Now based on what you've told about optimization and
>> `Base.uncompressed_ast()` I believe these 2 options - GloablRef and dot
>> notation - are interchangeable for my use case.
>>
>>
>>> The devdoc should have fairly complete description of what these nodes
>>> are.
>>>
>>
>> What they are - yes, but not when to use which one. Anyway, I didn't
>> intend to abuse anyone, just tried to get my head around AST internals.
>>
>>
>>
>>>
>>>



 On Monday, August 8, 2016 at 1:55:51 AM UTC+3, Yichao Yu wrote:
>
>
>
> On Mon, Aug 8, 2016 at 3:57 AM, Andrei Zh 
> wrote:
>
>> While parsing Julia expressions, I noticed that sometimes calls to
>>
>
> This shouldn't happen.
>
>
>> global functions resolve to `GloablRef` and sometimes to
>>
>
> and GlobalRef should only happen during lowering.
>
>
>> getfield(Module, func). Could somebody please clarify:
>>
>> 1. Why do we need both?
>>
>
> GlobalRef is essentially an optimization. It's more restricted and
> easier to interpret/codegen and is emitted by lowering/type inference when
> it is legal to do so.
>
>
>> 2. Is it safe to replace one by the other (assuming only modules and
>> functions are involved)?
>>
>
> Not always. There are certainly cases where a GlobalRef currently
> can't be used (method definition for example) I'm not sure if there's 
> cases
> the other way around.
>
>
>>>


Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
The upshot of the discussions seems to be "it won't work in 0.5 because the
experts say so, and there are no plans to change that". So I'm going to
accept that statement.

I think I'll use the following work-around:
```Julia
immutable Wrapper{Tag,Types}
data::Types
end
```
where I use `Tag` as `Val{Symbol}` to generate many distinct types, and
`Types` will be a tuple. This allows me to "generate" any immutable type. I
won't be able to access fields via the `.fieldname` syntax, but that is not
important to me.
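[A usage sketch of this workaround (editorial addition; the tag names and field layout are invented for illustration):]

```julia
immutable Wrapper{Tag,Types}
    data::Types
end

# two distinct "generated" types, distinguished only by the Val tag:
a = Wrapper{Val{:point2d}, Tuple{Float64,Float64}}((1.0, 2.0))
b = Wrapper{Val{:point3d}, Tuple{Float64,Float64,Float64}}((1.0, 2.0, 3.0))

x = a.data[1]               # fields are reached positionally via the tuple
typeof(a) == typeof(b)      # false -- the tag keeps the types separate
```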

-erik



On Wed, Aug 10, 2016 at 6:09 PM, Tim Holy  wrote:

> AFAICT, it remains possible to do dynamic type generation if you (1) print
> the
> code that would define the type to a file, and (2) `include` the file.
>
> function create_type_dynamically{T}(::Type{T})
> type_name = string("MyType", T)
> isdefined(Main, Symbol(type_name)) && return nothing
> filename = joinpath(tempdir(), string(T))
> open(filename, "w") do io
> println(io, """
> type $type_name
> val::$T
> end
> """)
> end
> eval(include(filename))
> nothing
> end
>
> Is this somehow less evil than doing it in a generated function?
>
> Best,
> --Tim
>
> On Wednesday, August 10, 2016 9:49:23 PM CDT Jameson Nash wrote:
> > > Why is it impossible to generate a new type at run time? I surely can
> do
> >
> > this by calling `eval` at module scope.
> >
> > module scope is compile time != runtime
> >
> > > Or I could create a type via a macro.
> >
> > Again, compile time != runtime
> >
> > > Given this, I can also call `eval` in a function, if I ensure the
> >
> > function is called only once.
> >
> > > Note that I've been doing this in Julia 0.4 without any (apparent)
> >
> > problems.
> >
> > Sure, I'm just here to tell you why it won't work that way in v0.5
> >
> > > I'm not defining thousands of types in my code. I define one type, and
> >
> > use it all over the place. However, each time my code runs (for days!),
> it
> > defines a different type, chosen by a set of user parameters. I'm also
> not
> > adding constraints to type parameters -- the type parameters are just
> `Int`
> > values.
> >
> > Right, the basic tradeoff required here is that you just need to provide
> a
> > convenient way for your user to declare the type at the toplevel that
> will
> > be used for the run. For example, you can just JIT the code for the whole
> > run at the beginning:
> >
> > function do_run()
> >   return @eval begin
> >  lots of function definitions
> >  do_work()
> >   end
> > end
> >
> > On Wed, Aug 10, 2016 at 5:14 PM Erik Schnetter 
> wrote:
> > > On Wed, Aug 10, 2016 at 1:45 PM, Jameson  wrote:
> > >> AFAIK, defining an arbitrary new type at runtime is impossible,
> sorry. In
> > >> v0.4 it was allowed, because we hoped that people understood not to
> try.
> > >> See also https://github.com/JuliaLang/julia/issues/16806. Note that
> it
> > >> is insufficient to "handle" the repeat calling via caching in a Dict
> or
> > >> similar such mechanism. It must always compute the exact final output
> > >> from
> > >> the input values alone (e.g. it must truly be const pure).
> > >
> > > The generated function first calculates the name of the type, then
> checks
> > > (`isdefined`) if this type is defined, and if so, returns it.
> Otherwise it
> > > is defined and then returned. This corresponds to looking up the type
> via
> > > `eval(typename)` (a symbol). I assume this is as pure as it gets.
> > >
> > > Why is it impossible to generate a new type at run time? I surely can
> do
> > > this by calling `eval` at module scope. Or I could create a type via a
> > > macro. Given this, I can also call `eval` in a function, if I ensure
> the
> > > function is called only once. Note that I've been doing this in Julia
> 0.4
> > > without any (apparent) problems.
> > >
> > > Being able to define types with arbitrary constraints in the type
> > >
> > >> parameters works OK for toy demos, but it's intentionally rather
> > >> difficult
> > >> since it causes performance issues at scale. Operations on Array are
> > >> likely
> > >> to be much faster (including the allocation) than on Tuple (due to the
> > >> cost
> > >> of *not* allocating) unless that Tuple is very small.
> > >
> > > I'm not defining thousands of types in my code. I define one type, and
> use
> > > it all over the place. However, each time my code runs (for days!), it
> > > defines a different type, chosen by a set of user parameters. I'm also
> not
> > > adding constraints to type parameters -- the type parameters are just
> > > `Int`
> > > values.
> > >
> > > And yes, I am using a mutable `Vector{T}` as underlying storage, that's
> > > not the issue here. The speedup comes from knowing the size of the
> array
> > > ahead of time, which allows the compiler to optimize indexing
> expressions.
> > > I've benchmarked it, and examined the generated machine code. There's
> no

Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Jameson Nash
The `eval` there is redundant with `include`. But there's nothing wrong
with running julia code during compile-time to dynamically generate code.
The confusion there comes from the fact that Julia has a compile-step
between each toplevel statement, which allows you to intermix runtime and
compile. However, if you tried to do the same from inside `@generated`,
then there is only portion of a compile-step before the runtime, so some
constructs can't be handled.

In v0.6, it's possible we could allow the creation of some types (many
types are simple enough they could be constructed without needing to actual
execute the definition). However, the fix for #265 will soon bar you from
calling the constructor of this type, so I'm not sure what would be gained.

> This isn't much of an explanation. I think this would seem like less of a
"because I say so" if you provided an explanation of what the problem with
calling eval within a generated function is, rather than just an assertion
that it can't work.

The definition of a generated function is "a pure function which computes a
function body based solely upon properties of the input argument types".
Since `eval` isn't pure, that would violate the definition of what
`@generated` is, and therefore it isn't usable. This isn't supposed to be
complicated, what an `@generated` function is actually supposed to be
allowed to do is just extremely limited, to make sure the system doesn't
fall apart on edge cases.
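[For contrast, a sketch of a *legal* generated function (editorial addition): the body depends only on the argument types, here the tuple length `N`, and calls no side-effecting code such as `eval`. Written in 0.4/0.5-era parametric syntax:]

```julia
# Build an unrolled sum expression from the tuple's length, known at
# compile time from the type alone:
@generated function tuplesum{N,T}(t::NTuple{N,T})
    ex = :(t[1])
    for i = 2:N
        ex = :($ex + t[$i])
    end
    return ex           # e.g. for N=3: :(t[1] + t[2] + t[3])
end

tuplesum((1, 2, 3))     # the loop is unrolled before runtime
```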



On Wed, Aug 10, 2016 at 6:10 PM Tim Holy  wrote:

> AFAICT, it remains possible to do dynamic type generation if you (1) print
> the
> code that would define the type to a file, and (2) `include` the file.
>
> function create_type_dynamically{T}(::Type{T})
> type_name = string("MyType", T)
> isdefined(Main, Symbol(type_name)) && return nothing
> filename = joinpath(tempdir(), string(T))
> open(filename, "w") do io
> println(io, """
> type $type_name
> val::$T
> end
> """)
> end
> eval(include(filename))
> nothing
> end
>
> Is this somehow less evil than doing it in a generated function?
>
> Best,
> --Tim
>
> On Wednesday, August 10, 2016 9:49:23 PM CDT Jameson Nash wrote:
> > > Why is it impossible to generate a new type at run time? I surely can
> do
> >
> > this by calling `eval` at module scope.
> >
> > module scope is compile time != runtime
> >
> > > Or I could create a type via a macro.
> >
> > Again, compile time != runtime
> >
> > > Given this, I can also call `eval` in a function, if I ensure the
> >
> > function is called only once.
> >
> > > Note that I've been doing this in Julia 0.4 without any (apparent)
> >
> > problems.
> >
> > Sure, I'm just here to tell you why it won't work that way in v0.5
> >
> > > I'm not defining thousands of types in my code. I define one type, and
> >
> > use it all over the place. However, each time my code runs (for days!),
> it
> > defines a different type, chosen by a set of user parameters. I'm also
> not
> > adding constraints to type parameters -- the type parameters are just
> `Int`
> > values.
> >
> > Right, the basic tradeoff required here is that you just need to provide
> a
> > convenient way for your user to declare the type at the toplevel that
> will
> > be used for the run. For example, you can just JIT the code for the whole
> > run at the beginning:
> >
> > function do_run()
> >   return @eval begin
> >  lots of function definitions
> >  do_work()
> >   end
> > end
> >
> > On Wed, Aug 10, 2016 at 5:14 PM Erik Schnetter 
> wrote:
> > > On Wed, Aug 10, 2016 at 1:45 PM, Jameson  wrote:
> > >> AFAIK, defining an arbitrary new type at runtime is impossible,
> sorry. In
> > >> v0.4 it was allowed, because we hoped that people understood not to
> try.
> > >> See also https://github.com/JuliaLang/julia/issues/16806. Note that
> it
> > >> is insufficient to "handle" the repeat calling via caching in a Dict
> or
> > >> similar such mechanism. It must always compute the exact final output
> > >> from
> > >> the input values alone (e.g. it must truly be const pure).
> > >
> > > The generated function first calculates the name of the type, then
> checks
> > > (`isdefined`) if this type is defined, and if so, returns it.
> Otherwise it
> > > is defined and then returned. This corresponds to looking up the type
> via
> > > `eval(typename)` (a symbol). I assume this is as pure as it gets.
> > >
> > > Why is it impossible to generate a new type at run time? I surely can
> do
> > > this by calling `eval` at module scope. Or I could create a type via a
> > > macro. Given this, I can also call `eval` in a function, if I ensure
> the
> > > function is called only once. Note that I've been doing this in Julia
> 0.4
> > > without any (apparent) problems.
> > >
> > > Being able to define types with arbitrary constraints in the type
> > >
> > >> parameters works OK for toy demos, but it's intentionally 

Re: [julia-users] GlobalRef(Module, func) vs. :(Module.func)

2016-08-10 Thread Andrei Zh
One more question related to the topic. I try to get function body 
expression in Julia 0.5 using: 

lambda = methods(func, types).ms[1].lambda_template
Base.uncompressed_ast(lambda)

However, this gives an expression with argument names replaced by 
`SlotNumber`s (e.g. `_2`). Is there a way to get the function expression with 
the actual variable names, or a recommended way to convert `SlotNumber`s back 
to them? So far I've only found a piece of code in ReverseDiffSource.jl, but 
that package is currently broken under Julia 0.5, so I suppose there might be 
a more stable way to do it. 
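For reference, a sketch of the slot-name mapping in current Julia syntax (the `lambda_template` field above is a 0.5 internal; here `code_lowered` is used instead): the `slotnames` vector maps each `SlotNumber(i)` back to the i-th original name, with slot 1 being the function itself.

```julia
# Hedged sketch: recover variable names for SlotNumbers from slotnames.
f(x, y) = x + y

# In current Julia, code_lowered returns CodeInfo objects directly;
# in 0.5 the equivalent data lived on lambda_template.
ci = code_lowered(f, Tuple{Int, Int})[1]

# slotnames[i] is the name behind SlotNumber(i); slot 1 is #self#.
@assert ci.slotnames[2:3] == [:x, :y]
```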


On Monday, August 8, 2016 at 10:39:34 PM UTC+3, Andrei Zh wrote:
>
> Thanks, this makes sense. Just for clarification: 
>  
>
>> It strongly depend on what you want to do and whether you care about what 
>> they represent.
>>
>
> I want to apply transformations (mostly) to algebraic expressions and 
> function calls in order to simplify them, replace argument names, find 
> derivatives, etc. Thus I don't really care about the difference between 
> GlobalRef and dot notation as long as they contain the same amount of 
> information and I can evaluate them to actual function objects. But since 
> there's a choice between 2 options I was trying to understand which one is 
> more suitable for me and whether I can cast one to another (again, in the 
> context of my simple transformations). 
>  
> Now based on what you've told me about optimization and 
> `Base.uncompressed_ast()`, I believe these 2 options - GlobalRef and dot 
> notation - are interchangeable for my use case. 
>  
>
>> The devdoc should have fairly complete description of what these nodes 
>> are. 
>>
>
> What they are - yes, but not when to use which one. Anyway, I didn't 
> intend to abuse anyone, just tried to get my head around AST internals. 
>
>  
>
>>  
>>
>>>
>>>
>>>
>>> On Monday, August 8, 2016 at 1:55:51 AM UTC+3, Yichao Yu wrote:



 On Mon, Aug 8, 2016 at 3:57 AM, Andrei Zh  wrote:

> While parsing Julia expressions, I noticed that sometimes calls to 
>

 This shouldn't happen.
  

> global functions resolve to `GlobalRef` and sometimes to 
>

 and GlobalRef should only happen during lowering.
  

> getfield(Module, func). Could somebody please clarify:
>
> 1. Why do we need both? 
>

 GlobalRef is essentially an optimization. It's more restricted and 
 easier to interpret/codegen and is emitted by lowering/type inference when 
 it is legal to do so.
  

> 2. Is it safe to replace one by the other (assuming only modules and 
> functions are involved)? 
>

 Not always. There are certainly cases where a GlobalRef currently can't 
 be used (method definition for example) I'm not sure if there's cases the 
 other way around.


>>
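As an aside on the interchangeability question above: when evaluated, a `GlobalRef` and the equivalent dot expression resolve to the same function object, which is a quick way to confirm they carry the same information for simple expression transformations. A minimal illustration:

```julia
# GlobalRef and dot notation denote the same binding when evaluated.
ex1 = GlobalRef(Base, :sin)   # lowered form
ex2 = :(Base.sin)             # surface-syntax form
@assert eval(ex1) === Base.sin
@assert eval(ex2) === Base.sin
```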

[julia-users] Re: Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Kit Adams
Thanks for the heads up Palli.
I was using "real-time" in the sense of a process running continuously with 
new data coming in, flowing through signal processing chains and being 
displayed and recorded, rather than being absolutely time critical. If a 
Julia gc causes a momentary delay in one or more of the channels, that 
should not be a problem.

I agree that real-time isn't strictly about performance: the data flows 
through the system in a time-limited way, so all is well as long as it keeps 
up over the medium term.

But in our system, the same code is also used offline when opening files, 
and in this case the user has to wait around. So far I have found signal 
processing components implemented with Julia scripts to be within 10% of 
their C++ equivalents in performance.

On Wednesday, August 10, 2016 at 11:34:54 PM UTC+12, Páll Haraldsson wrote:
>
> On Wednesday, August 10, 2016 at 6:57:15 AM UTC, Kit Adams wrote:
>>
>> I am investigating the feasibility of embedding Julia in a C++ real-time 
>> signal processing framework, using Julia-0.4.6 (BTW, the performance is 
>> looking amazing).
>>
>
> There are other threads on Julia and real-time; I'll not repeat them here. 
> Are you ok with Julia for real-time work? Julia strictly isn't real-time; 
> you just have to be careful.
>
> I'm not looking into your embedding/GC issues, as I'm not too familiar 
> with them. It seems to me embedding doesn't fundamentally change that the GC 
> isn't real-time. And real-time isn't strictly about performance.
>
>
>> However, for this usage I need to retain Julia state variables across c++ 
>> function calls, so the stack based JL_GC_PUSH() and JL_GC_POP() are not 
>> sufficient. 
>> When I injected some jl_gc_collect() calls for testing purposes, to 
>> simulate having multiple Julia scripts running (from the same thread), I 
>> got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState); 
>> and appropriate matching jl_gc_unpreserve() calls.
>>
>> I see these functions have been removed from the latest Julia version. 
>>
>> Is there an alternative that allows Julia values to be retained in a C++ 
>> app across gc calls?
>>
>
> -- 
> Palli.
>  
>


[julia-users] VideoIO failing to import in julia 0.4.6 (Fedora 24)

2016-08-10 Thread Shivkumar Chandrasekaran
On "import VideoIO" here is the relevant piece of the error:
ERROR: LoadError: LoadError: could not open file 
/home/shiv/.julia/v0.4/VideoIO/src/ffmpeg/AVUtil/src/../../../ffmpeg/AVUtil/v55/LIBAVUTIL.jl

The problem is that it is looking for "v55" directory but even github seems 
to have only "v54" directory.

Any idea how this can be fixed?

Thanks.

--shiv--


Re: [julia-users] Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Kit Adams
Thanks for those ideas Bart.
I think I will try stashing an index into the array in the smart pointer 
(alongside the pointer) and use your idea of a stack of indices to the 
available null entries.
I'm currently working on embedding v8 for comparison purposes, so I won't 
do this immediately.


On Wednesday, August 10, 2016 at 11:06:23 PM UTC+12, Bart Janssens wrote:
>
> On Wed, Aug 10, 2016 at 11:46 AM Kit Adams  > wrote:
>
>> Thank you for those links, they are a great help.
>>
>> Is there an "unprotect_from_gc(T* val)"?
>>
>> I am looking for a smart pointer a bit like v8's UniquePersistent<>.
>>
>> I guess I could make one that searched through the array for the value in 
>> order to remove it (in the smart pointer's dtor).
>>
>>
> No, I didn't need unprotect so far. Iterating over the array is one way, 
> another option would be to keep a std::map along with the array to 
> immediately find the index. I think removing elements from the array is 
> cumbersome, it's probably best to set the value to null (by assigning a 
> jl_box_voidpointer(nullptr), setting a jl_value_t* to null directly feels 
> wrong) and depending on how many times objects are added / removed keep a 
> stack of the empty spots in the array so they can be reused.
>
>
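The rooting scheme discussed above (a GC-visible array plus a stack of free slots) can be sketched on the Julia side as follows. The `protect`/`unprotect` names are hypothetical; in the embedding case the `ROOTS` array would be reached from C++ (e.g. via `jl_get_global`) rather than directly.

```julia
# Hedged sketch: values stored in ROOTS stay reachable, so the GC keeps
# them alive; freed slots are recycled through a stack of free indices.
const ROOTS = Any[]
const FREE_SLOTS = Int[]

function protect(v)::Int
    if isempty(FREE_SLOTS)
        push!(ROOTS, v)
        return length(ROOTS)
    end
    i = pop!(FREE_SLOTS)
    ROOTS[i] = v
    return i
end

function unprotect(i::Int)
    ROOTS[i] = nothing      # drop the reference so the value can be collected
    push!(FREE_SLOTS, i)
    return nothing
end

i = protect(big(1.0))
@assert ROOTS[i] == big(1.0)
unprotect(i)
@assert protect(:reused) == i   # the freed slot is handed out again
```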

[julia-users] Re: BigInt / BigFloat / Bitshift

2016-08-10 Thread digxx
What is the difference between >> and >>> ?
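The two differ only for negative values: `>>` is an arithmetic (sign-extending) shift, while `>>>` is a logical (zero-filling) shift. A minimal illustration, assuming 64-bit integers:

```julia
# >> keeps the sign bit; >>> shifts zeros in from the left.
x = Int64(-8)
@assert x >> 1 == -4
@assert x >>> 1 == 9223372036854775804   # 2^63 - 4: sign bit became data
@assert 8 >> 1 == 8 >>> 1 == 4           # identical for non-negative values
```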


[julia-users] Re: BigInt / BigFloat / Bitshift

2016-08-10 Thread digxx
Sorry:
for example, is (n/2)^10 still valid up to the last digit (neglecting the 
fact that these would be zero in this case anyway)?

That was actually a question, though I missed the question marks.

About the bitshift: I found it being 4<<2, for example, but is there 
something like that for any base?
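As far as I know there is no built-in digit shift for an arbitrary base, but since shifting by k digits in base b is just multiplication or division by b^k, `div` covers it. A rough sketch:

```julia
# A binary shift is division by a power of 2; the base-b analogue is div by b^k.
n = big(10)^20
@assert n >> 2 == div(n, 4)              # n >> k  ==  div(n, 2^k)
@assert div(n, big(10)^3) == big(10)^17  # "shift" by 3 digits in base 10
@assert 4 << 2 == 16                     # 4 * 2^2, as in the example above
```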


[julia-users] BigInt / BigFloat / Bitshift

2016-08-10 Thread digxx
As far as I understand Julia adjusts the bitsize of an integer when I use 
BigInt.
For example n=big(10)^1 is as many bits as needed.
Now 2*n is still an integer, though dividing makes a BigFloat out of it. How 
big is the BigFloat? Does it "resolve" up to the last integer? Does it 
increase with size like a BigInt would?
For example, is (n/2)^10 still valid up to the last digit (neglecting the 
fact that these would be zero in this case anyway)?
When dividing n by an integer and the result is an integer, how would I 
achieve this? Is there already a simple way?
For example, I could represent n in some basis b, look where the right-most 
digit is zero, and then bitshift; but the bitshifts I found in the 
manual, mainly << >> .>> etc., do not work, or I'm too stupid.
How am I supposed to use these?
3 << ?


Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Tim Holy
AFAICT, it remains possible to do dynamic type generation if you (1) print the 
code that would define the type to a file, and (2) `include` the file.

function create_type_dynamically{T}(::Type{T})
    type_name = string("MyType", T)
    isdefined(Main, Symbol(type_name)) && return nothing
    filename = joinpath(tempdir(), string(T))
    open(filename, "w") do io
        println(io, """
        type $type_name
            val::$T
        end
        """)
    end
    eval(include(filename))
    nothing
end

Is this somehow less evil than doing it in a generated function?

Best,
--Tim

On Wednesday, August 10, 2016 9:49:23 PM CDT Jameson Nash wrote:
> > Why is it impossible to generate a new type at run time? I surely can do
> 
> this by calling `eval` at module scope.
> 
> module scope is compile time != runtime
> 
> > Or I could create a type via a macro.
> 
> Again, compile time != runtime
> 
> > Given this, I can also call `eval` in a function, if I ensure the
> 
> function is called only once.
> 
> > Note that I've been doing this in Julia 0.4 without any (apparent)
> 
> problems.
> 
> Sure, I'm just here to tell you why it won't work that way in v0.5
> 
> > I'm not defining thousands of types in my code. I define one type, and
> 
> use it all over the place. However, each time my code runs (for days!), it
> defines a different type, chosen by a set of user parameters. I'm also not
> adding constraints to type parameters -- the type parameters are just `Int`
> values.
> 
> Right, the basic tradeoff required here is that you just need to provide a
> convenient way for your user to declare the type at the toplevel that will
> be used for the run. For example, you can just JIT the code for the whole
> run at the beginning:
> 
> function do_run()
>   return @eval begin
>  lots of function definitions
>  do_work()
>   end
> end
> 
> On Wed, Aug 10, 2016 at 5:14 PM Erik Schnetter  wrote:
> > On Wed, Aug 10, 2016 at 1:45 PM, Jameson  wrote:
> >> AFAIK, defining an arbitrary new type at runtime is impossible, sorry. In
> >> v0.4 it was allowed, because we hoped that people understood not to try.
> >> See also https://github.com/JuliaLang/julia/issues/16806. Note that it
> >> is insufficient to "handle" the repeat calling via caching in a Dict or
> >> similar such mechanism. It must always compute the exact final output
> >> from
> >> the input values alone (e.g. it must truly be const pure).
> > 
> > The generated function first calculates the name of the type, then checks
> > (`isdefined`) if this type is defined, and if so, returns it. Otherwise it
> > is defined and then returned. This corresponds to looking up the type via
> > `eval(typename)` (a symbol). I assume this is as pure as it gets.
> > 
> > Why is it impossible to generate a new type at run time? I surely can do
> > this by calling `eval` at module scope. Or I could create a type via a
> > macro. Given this, I can also call `eval` in a function, if I ensure the
> > function is called only once. Note that I've been doing this in Julia 0.4
> > without any (apparent) problems.
> > 
> > Being able to define types with arbitrary constraints in the type
> > 
> >> parameters works OK for toy demos, but it's intentionally rather
> >> difficult
> >> since it causes performance issues at scale. Operations on Array are
> >> likely
> >> to be much faster (including the allocation) than on Tuple (due to the
> >> cost
> >> of *not* allocating) unless that Tuple is very small.
> > 
> > I'm not defining thousands of types in my code. I define one type, and use
> > it all over the place. However, each time my code runs (for days!), it
> > defines a different type, chosen by a set of user parameters. I'm also not
> > adding constraints to type parameters -- the type parameters are just
> > `Int`
> > values.
> > 
> > And yes, I am using a mutable `Vector{T}` as underlying storage, that's
> > not the issue here. The speedup comes from knowing the size of the array
> > ahead of time, which allows the compiler to optimize indexing expressions.
> > I've benchmarked it, and examined the generated machine code. There's no
> > doubt that generating a type is the "right thing" to do in this case.
> > 
> > -erik
> > 
> > On Wednesday, August 10, 2016 at 1:25:15 PM UTC-4, Erik Schnetter wrote:
> >>> I want to create a type, and need more flexibility than Julia's `type`
> >>> definitions offer (see ).
> >>> Currently, I have a function that generates the type, and returns the
> >>> type.
> >>> 
> >>> I would like to make this a generated function (as it was in Julia 0.4).
> >>> The advantage is that this leads to type stability: The generated type
> >>> only
> >>> depends on the types of the arguments pass to the function, and Julia
> >>> would
> >>> be able to infer the type.
> >>> 
> >>> In practice, this looks like
> >>> 
> >>> using FastArrays
> >>> # A (10x10) fixed-size array
> >>> typealias 

Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Stefan Karpinski
On Wed, Aug 10, 2016 at 5:49 PM, Jameson Nash  wrote:

> module scope is compile time != runtime


This isn't much of an explanation. I think this would seem like less of a
"because I say so" if you provided an explanation of what the problem with
calling eval within a generated function is, rather than just an assertion
that it can't work.


Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Jameson Nash
> Why is it impossible to generate a new type at run time? I surely can do
this by calling `eval` at module scope.

module scope is compile time != runtime

> Or I could create a type via a macro.

Again, compile time != runtime

> Given this, I can also call `eval` in a function, if I ensure the
function is called only once.

> Note that I've been doing this in Julia 0.4 without any (apparent)
problems.

Sure, I'm just here to tell you why it won't work that way in v0.5

> I'm not defining thousands of types in my code. I define one type, and
use it all over the place. However, each time my code runs (for days!), it
defines a different type, chosen by a set of user parameters. I'm also not
adding constraints to type parameters -- the type parameters are just `Int`
values.

Right, the basic tradeoff required here is that you just need to provide a
convenient way for your user to declare the type at the toplevel that will
be used for the run. For example, you can just JIT the code for the whole
run at the beginning:

function do_run()
  return @eval begin
 lots of function definitions
 do_work()
  end
end
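A hedged sketch of that pattern in current syntax (`struct` rather than 0.5's `type`; the names here are made up): the type is defined once at toplevel via `eval`, keyed on a runtime parameter, and later calls reuse the existing definition.

```julia
# Define a size-parameterized type on first request, then reuse it.
function get_sized_type(N::Int)
    tname = Symbol("FixedVec", N)          # hypothetical naming scheme
    if !isdefined(Main, tname)
        @eval Main begin
            struct $tname
                data::NTuple{$N, Float64}
            end
        end
    end
    return @eval Main $tname               # eval runs in the latest world
end

T = get_sized_type(3)
@assert T === get_sized_type(3)                    # second call reuses it
@assert fieldtype(T, :data) === NTuple{3, Float64}
```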



On Wed, Aug 10, 2016 at 5:14 PM Erik Schnetter  wrote:

> On Wed, Aug 10, 2016 at 1:45 PM, Jameson  wrote:
>
>> AFAIK, defining an arbitrary new type at runtime is impossible, sorry. In
>> v0.4 it was allowed, because we hoped that people understood not to try.
>> See also https://github.com/JuliaLang/julia/issues/16806. Note that it
>> is insufficient to "handle" the repeat calling via caching in a Dict or
>> similar such mechanism. It must always compute the exact final output from
>> the input values alone (e.g. it must truly be const pure).
>>
>
> The generated function first calculates the name of the type, then checks
> (`isdefined`) if this type is defined, and if so, returns it. Otherwise it
> is defined and then returned. This corresponds to looking up the type via
> `eval(typename)` (a symbol). I assume this is as pure as it gets.
>
> Why is it impossible to generate a new type at run time? I surely can do
> this by calling `eval` at module scope. Or I could create a type via a
> macro. Given this, I can also call `eval` in a function, if I ensure the
> function is called only once. Note that I've been doing this in Julia 0.4
> without any (apparent) problems.
>
> Being able to define types with arbitrary constraints in the type
>> parameters works OK for toy demos, but it's intentionally rather difficult
>> since it causes performance issues at scale. Operations on Array are likely
>> to be much faster (including the allocation) than on Tuple (due to the cost
>> of *not* allocating) unless that Tuple is very small.
>>
>
> I'm not defining thousands of types in my code. I define one type, and use
> it all over the place. However, each time my code runs (for days!), it
> defines a different type, chosen by a set of user parameters. I'm also not
> adding constraints to type parameters -- the type parameters are just `Int`
> values.
>
> And yes, I am using a mutable `Vector{T}` as underlying storage, that's
> not the issue here. The speedup comes from knowing the size of the array
> ahead of time, which allows the compiler to optimize indexing expressions.
> I've benchmarked it, and examined the generated machine code. There's no
> doubt that generating a type is the "right thing" to do in this case.
>
> -erik
>
> On Wednesday, August 10, 2016 at 1:25:15 PM UTC-4, Erik Schnetter wrote:
>>>
>>> I want to create a type, and need more flexibility than Julia's `type`
>>> definitions offer (see ).
>>> Currently, I have a function that generates the type, and returns the type.
>>>
>>> I would like to make this a generated function (as it was in Julia 0.4).
>>> The advantage is that this leads to type stability: The generated type only
>>> depends on the types of the arguments pass to the function, and Julia would
>>> be able to infer the type.
>>>
>>> In practice, this looks like
>>>
>>> using FastArrays
>>> # A (10x10) fixed-size array
>>> typealias Arr2d_10x10 FastArray(1:10, 1:10)
>>> a2 = Arr2d_10x10{Float64}(:,:)
>>>
>>>
>>> In principle I'd like to write `FastArray{1:10, 1:10}` (with curly
>>> braces), but Julia doesn't offer sufficient flexibility for this. Hence I
>>> use a regular function.
>>>
>>> To generate the type in the function I need to call `eval`. (Yes, I'm
>>> aware that the function might be called multiple times, and I'm handling
>>> this.)
>>>
>>> Do you have a suggestion for a different solution?
>>>
>>> -erik
>>>
>>>
>>> On Wed, Aug 10, 2016 at 11:51 AM, Jameson  wrote:
>>>
 It is tracking the dynamic scope of the code generator, it doesn't care
 about what code you emit. The generator function must not cause any
 side-effects and must be entirely computed from the types of the inputs and
 not other global state. Over time, these conditions are likely to be 

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
Though I don't know if they have sparse algorithms. But they give you a good 
base there to help you get started making one...

On Wednesday, August 10, 2016 at 2:20:54 PM UTC-7, Chris Rackauckas wrote:
>
> GenericSVD.jl  has linear 
> solver routines which work for generic number types (like BigFloat). You 
> can use an SVD to solve the linear system. It's not as fast as other 
> methods, but you may find this useful.
>
> On Wednesday, August 10, 2016 at 12:47:10 PM UTC-7, Nicklas Andersen wrote:
>>
>> Hello
>>
>> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = 
>> b.
>> For this task I'm using Julias *SparseMatrixCSC *type for the definition 
>> of my matrices and Julias built in backslash ' \ ' operator for the 
>> solution of the system.
>> I need *quadruple precision* and thus I've been trying to implement my 
>> routine with the *BigFloat *type together with the SparseMatrixCSC type.
>>
>> To illustrate this, I give a simple example here:
>> set_bigfloat_precision(128);
>> A  = speye(BigFloat, 2, 2);
>> b = ones(BigFloat, 2, 1);
>> x = A\b;
>>
>> If I do this I either get a StackOverFlow error:
>> ERROR: StackOverflowError:
>>  in copy at array.jl:100
>>  in float at sparse/sparsematrix.jl:234
>>  in call at essentials.jl:57 (repeats 254 times)
>>
>> or the solver seems to run forever and never terminates. As the second 
>> error indicates it seems like the sparse solver only accepts the normal 
>> *float* types.
>> My question is then, is there a way to get quadruple precision with the 
>> standard solvers in Julia, in this case UMFpack I assume ? or should I look 
>> for something else (in this case any suggestions :) ) ?
>>
>> Regards Nicklas A.
>>
>>

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Chris Rackauckas
GenericSVD.jl  has linear 
solver routines which work for generic number types (like BigFloat). You 
can use an SVD to solve the linear system. It's not as fast as other 
methods, but you may find this useful.
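To make that concrete, a minimal sketch in current Julia syntax (the thread's `set_bigfloat_precision` is now `setprecision`): densify the sparse matrix and let the generic LU behind `\` handle BigFloat, since UMFPACK only supports machine floats.

```julia
using LinearAlgebra, SparseArrays

setprecision(BigFloat, 128)
A = sparse(BigFloat.([2 0; 0 4]))   # sparse storage itself works for BigFloat...
b = ones(BigFloat, 2)

# ...but sparse \ would dispatch to UMFPACK, which can't take BigFloat,
# so convert to dense and use the generic LU factorization instead.
x = Matrix(A) \ b
@assert x == [big(0.5), big(0.25)]
```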

On Wednesday, August 10, 2016 at 12:47:10 PM UTC-7, Nicklas Andersen wrote:
>
> Hello
>
> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
> For this task I'm using Julias *SparseMatrixCSC *type for the definition 
> of my matrices and Julias built in backslash ' \ ' operator for the 
> solution of the system.
> I need *quadruple precision* and thus I've been trying to implement my 
> routine with the *BigFloat *type together with the SparseMatrixCSC type.
>
> To illustrate this, I give a simple example here:
> set_bigfloat_precision(128);
> A  = speye(BigFloat, 2, 2);
> b = ones(BigFloat, 2, 1);
> x = A\b;
>
> If I do this I either get a StackOverFlow error:
> ERROR: StackOverflowError:
>  in copy at array.jl:100
>  in float at sparse/sparsematrix.jl:234
>  in call at essentials.jl:57 (repeats 254 times)
>
> or the solver seems to run forever and never terminates. As the second 
> error indicates it seems like the sparse solver only accepts the normal 
> *float* types.
> My question is then, is there a way to get quadruple precision with the 
> standard solvers in Julia, in this case UMFpack I assume ? or should I look 
> for something else (in this case any suggestions :) ) ?
>
> Regards Nicklas A.
>
>

Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
On Wed, Aug 10, 2016 at 1:45 PM, Jameson  wrote:

> AFAIK, defining an arbitrary new type at runtime is impossible, sorry. In
> v0.4 it was allowed, because we hoped that people understood not to try.
> See also https://github.com/JuliaLang/julia/issues/16806. Note that it is
> insufficient to "handle" the repeat calling via caching in a Dict or
> similar such mechanism. It must always compute the exact final output from
> the input values alone (e.g. it must truly be const pure).
>

The generated function first calculates the name of the type, then checks
(`isdefined`) if this type is defined, and if so, returns it. Otherwise it
is defined and then returned. This corresponds to looking up the type via
`eval(typename)` (a symbol). I assume this is as pure as it gets.

Why is it impossible to generate a new type at run time? I surely can do
this by calling `eval` at module scope. Or I could create a type via a
macro. Given this, I can also call `eval` in a function, if I ensure the
function is called only once. Note that I've been doing this in Julia 0.4
without any (apparent) problems.

Being able to define types with arbitrary constraints in the type
> parameters works OK for toy demos, but it's intentionally rather difficult
> since it causes performance issues at scale. Operations on Array are likely
> to be much faster (including the allocation) than on Tuple (due to the cost
> of *not* allocating) unless that Tuple is very small.
>

I'm not defining thousands of types in my code. I define one type, and use
it all over the place. However, each time my code runs (for days!), it
defines a different type, chosen by a set of user parameters. I'm also not
adding constraints to type parameters -- the type parameters are just `Int`
values.

And yes, I am using a mutable `Vector{T}` as underlying storage, that's not
the issue here. The speedup comes from knowing the size of the array ahead
of time, which allows the compiler to optimize indexing expressions. I've
benchmarked it, and examined the generated machine code. There's no doubt
that generating a type is the "right thing" to do in this case.

-erik

On Wednesday, August 10, 2016 at 1:25:15 PM UTC-4, Erik Schnetter wrote:
>>
>> I want to create a type, and need more flexibility than Julia's `type`
>> definitions offer (see ).
>> Currently, I have a function that generates the type, and returns the type.
>>
>> I would like to make this a generated function (as it was in Julia 0.4).
>> The advantage is that this leads to type stability: The generated type only
>> depends on the types of the arguments pass to the function, and Julia would
>> be able to infer the type.
>>
>> In practice, this looks like
>>
>> using FastArrays
>> # A (10x10) fixed-size array
>> typealias Arr2d_10x10 FastArray(1:10, 1:10)
>> a2 = Arr2d_10x10{Float64}(:,:)
>>
>>
>> In principle I'd like to write `FastArray{1:10, 1:10}` (with curly
>> braces), but Julia doesn't offer sufficient flexibility for this. Hence I
>> use a regular function.
>>
>> To generate the type in the function I need to call `eval`. (Yes, I'm
>> aware that the function might be called multiple times, and I'm handling
>> this.)
>>
>> Do you have a suggestion for a different solution?
>>
>> -erik
>>
>>
>> On Wed, Aug 10, 2016 at 11:51 AM, Jameson  wrote:
>>
>>> It is tracking the dynamic scope of the code generator, it doesn't care
>>> about what code you emit. The generator function must not cause any
>>> side-effects and must be entirely computed from the types of the inputs and
>>> not other global state. Over time, these conditions are likely to be more
>>> accurately enforced, as needed to make various optimizations reliable
>>> and/or correct.
>>>
>>>
>>>
>>> On Wednesday, August 10, 2016 at 10:48:31 AM UTC-4, Erik Schnetter wrote:

 I'm encountering the error "eval cannot be used in a generated
 function" in Julia 0.5 for code that is working in Julia 0.4. My question
 is -- what exactly is now disallowed? For example, if a generated function
 `f` calls another (non-generated) function `g`, can `g` then call `eval`?
 Does the word "in" here refer to the code that is generated by the
 generated function, or does it refer to the dynamical scope of the code
 generation state of the generated function?

 To avoid the error I have to redesign my code, and I'd like to know
 ahead of time what to avoid. A Google search only turned up the C file
 within Julia that emits the respective error message, as well as the Travis
 build log for my package.

 -erik

 --
 Erik Schnetter  http://www.perimeterinstitute.
 ca/personal/eschnetter/

>>>
>>
>>
>> --
>> Erik Schnetter  http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter 

[julia-users] Re: Problems with A\b and BigFloat

2016-08-10 Thread Kristoffer Carlsson
The sparse solvers use UMFPACK and CHOLMOD, which are C libraries and thus 
only support the standard number types. You would need a solver written in 
pure Julia that could take any number type.

The stackoverflow error was fixed 
here: https://github.com/JuliaLang/julia/pull/14902 

On Wednesday, August 10, 2016 at 9:47:10 PM UTC+2, Nicklas Andersen wrote:
>
> Hello
>
> I'm trying to solve a large, sparse and unsymmetrical linear system Ax = b.
> For this task I'm using Julias *SparseMatrixCSC *type for the definition 
> of my matrices and Julias built in backslash ' \ ' operator for the 
> solution of the system.
> I need *quadruple precision* and thus I've been trying to implement my 
> routine with the *BigFloat *type together with the SparseMatrixCSC type.
>
> To illustrate this, I give a simple example here:
> set_bigfloat_precision(128);
> A  = speye(BigFloat, 2, 2);
> b = ones(BigFloat, 2, 1);
> x = A\b;
>
> If I do this I either get a StackOverFlow error:
> ERROR: StackOverflowError:
>  in copy at array.jl:100
>  in float at sparse/sparsematrix.jl:234
>  in call at essentials.jl:57 (repeats 254 times)
>
> or the solver seems to run forever and never terminates. As the second 
> error indicates it seems like the sparse solver only accepts the normal 
> *float* types.
> My question is then, is there a way to get quadruple precision with the 
> standard solvers in Julia, in this case UMFpack I assume ? or should I look 
> for something else (in this case any suggestions :) ) ?
>
> Regards Nicklas A.
>
>

Re: [julia-users] Adding tuples

2016-08-10 Thread Kristoffer Carlsson
The Val{N} version makes the compiler specialize the tuple creation 
for that specific value of N. It should be significantly faster for small 
lengths of the tuple, and the return value should also be inferrable. 
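In current Julia the same idea is written with `Val(N)` and a `where` clause (0.5 used `Val{N}`); a hedged sketch:

```julia
# Tuple addition unrolled via Val: the length N is part of the type,
# so ntuple can specialize and the result type is inferable.
addt(a::NTuple{N}, b::NTuple{N}) where {N} = ntuple(i -> a[i] + b[i], Val(N))

@assert addt((1, 2, 3), (10, 20, 30)) == (11, 22, 33)
@assert addt((1.5, 2.5), (0.5, 0.5)) == (2.0, 3.0)
```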

On Wednesday, August 10, 2016 at 8:59:45 PM UTC+2, Jeffrey Sarnoff wrote:
>
> almost certainly -- I use BenchmarkTools for this sort of timing, and can 
> recommend it.
>
> On Wednesday, August 10, 2016 at 2:56:17 PM UTC-4, Bill Hart wrote:
>>
>> For me, the version using CartesianIndex is exactly the same speed as the 
>> syntax with the Val, which in turn is faster than without Val.
>>
>> It probably depends a lot on the application and what the compiler can 
>> handle.
>>
>> Bill.
>>
>> On 10 August 2016 at 20:45, Jeffrey Sarnoff  wrote:
>>
>>> relative to the same thing without the Val
>>>
>>>
>>> On Wednesday, August 10, 2016 at 2:45:06 PM UTC-4, Jeffrey Sarnoff wrote:

 that slows it down by a factor of 5

 On Wednesday, August 10, 2016 at 2:37:25 PM UTC-4, Bill Hart wrote:
>
> How about compared with:
>
> ntuple(i -> a[i] + b[i], Val{N})
>
>
> On 10 August 2016 at 20:32, Jeffrey Sarnoff  
> wrote:
>
>> Bill,
>>
>> Following Eric's note, I tried (with a,b equi-length tuples)
>> function addTuples(a,b)
>> ca = CartesianIndex(a)
>> cb = CartesianIndex(b)
>> return (ca+cb).I
>> end
>>
>>
>> for me, with 100 values it ran ~60% faster, and with 1000 values much 
>> much faster than  
>>  ntuple(i -> a[i] + b[i], N)
>>
>>
>>
>> On Wednesday, August 10, 2016 at 11:06:46 AM UTC-4, Bill Hart wrote:
>>>
>>> This code seems to be (about 50%) faster than recursive functions:
>>>
>>> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>>>
>>>
>>> But this seems (about 50%) slower:
>>>
>>>  ((a[i] + b[i] for i = 1:N)...)
>>>
>>>
>>> Anyway, I can use the first method, until I find something faster. 
>>> It's definitely way more convenient. Thanks.
>>>
>>> Bill. 
>>>
>>>
>>>
>>> On 10 August 2016 at 16:56, Erik Schnetter  
>>> wrote:
>>>
 The built-in type `CartesianIndex` supports adding and subtracting, 
 and presumably also multiplication. It is implemented very 
 efficiently, 
 based on tuples.

 Otherwise, to generate efficient code, you might have to make use 
 of "generated functions". These are similar to macros, but they know 
 about 
 the types upon which they act, and thus know the value of `N`. This is 
 a 
 bit low-level, so I'd use this only if (a) there is not other package 
 available, and (b) you have examined Julia's performance and found it 
 lacking.

 I would avoid overloading operators for `NTuple`, and instead us a 
 new immutable type, since overloading operations for Julia's tuples 
 can 
 have unintended side effects.

 -erik


 On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
 julia...@googlegroups.com> wrote:

> Does anyone know an efficient way to add NTuples in Julia?
>
> I can do it using recursive functions, but for various reasons 
> this is not efficient in my context. I really miss something like 
> tuple(a[i] + b[i] for i in 1:N) to create the resulting tuple all in 
> one go 
> (here a and b would be tuples).
>
> The compiler doesn't do badly with recursive functions for 
> handling tuples in very straightforward situations, but for example, 
> if I 
> want to create an immutable type based on a tuple the compiler 
> doesn't seem 
> to be able to handle the necessary optimisations. At least, that is 
> what I 
> infer from the timings. Consider
>
> immutable bill{N}
>d::NTuple{N, Int}
> end
>
> and I want to add two bill's together. If I have to add the tuples 
> themselves using recursive functions, then I no longer seem to be 
> able to 
> do something like:
>
> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose 
> elements are of type bill.
>
> I know how to handle tuples via arrays, but for efficiency reasons 
> I certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 
> 1:N]...).
>
> Bill.
>



 -- 
 Erik Schnetter  
 http://www.perimeterinstitute.ca/personal/eschnetter/

>>>
>>>
>
>>

[julia-users] Problems with A\b and BigFloat

2016-08-10 Thread Nicklas Andersen
Hello

I'm trying to solve a large, sparse and unsymmetric linear system Ax = b.
For this task I'm using Julia's *SparseMatrixCSC *type for the definition of 
my matrices and Julia's built-in backslash ' \ ' operator for the solution 
of the system.
I need *quadruple precision* and thus I've been trying to implement my 
routine with the *BigFloat *type together with the SparseMatrixCSC type.

To illustrate this, I give a simple example here:
set_bigfloat_precision(128);
A  = speye(BigFloat, 2, 2);
b = ones(BigFloat, 2, 1);
x = A\b;

If I do this I either get a StackOverFlow error:
ERROR: StackOverflowError:
 in copy at array.jl:100
 in float at sparse/sparsematrix.jl:234
 in call at essentials.jl:57 (repeats 254 times)

or the solver seems to run forever and never terminates. As the error trace 
indicates, it seems the sparse solver only accepts the standard 
*float* types.
My question is then: is there a way to get quadruple precision with the 
standard solvers in Julia (in this case UMFPACK, I assume)? Or should I look 
for something else (in which case, any suggestions :) )?

Regards Nicklas A.
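One commonly suggested workaround (iterative refinement, as recommended elsewhere in this thread) can be sketched as follows: factor a Float64 copy of A once with the fast sparse solver, then polish the solution in BigFloat. This is a hedged sketch, not the library's prescribed method; variable names are illustrative, it assumes 0.4/0.5-era Julia where `lufact` exists, and `map` over a SparseMatrixCSC may need an explicit convert depending on the Julia version:

```julia
# Sketch: assumes A::SparseMatrixCSC{BigFloat,Int} and b::Vector{BigFloat}.
F = lufact(map(Float64, A))             # UMFPACK factorization in double precision
x = map(BigFloat, F \ map(Float64, b))  # initial double-precision solution
for k = 1:20
    r = b - A*x                         # residual computed in full BigFloat precision
    norm(r) <= eps(BigFloat)*norm(b) && break
    x += map(BigFloat, F \ map(Float64, r))  # correct x using the Float64 factors
end
```

Each pass gains roughly the accuracy of the Float64 solve, so a handful of iterations typically reaches quadruple precision when A is not too ill-conditioned.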



[julia-users] Re: What do do when @less does not work?

2016-08-10 Thread Jeffrey Sarnoff
Páll
  FYI this is most current of the evolving approaches to extended precision 
https://github.com/JuliaArbTypes/ArbFloats.jl

On Wednesday, August 10, 2016 at 2:58:16 PM UTC-4, Jeffrey Sarnoff wrote:
>
> When one writes a module/package that implements a new type it is prudent 
> to provide a type-specific value hashing function.  This is particularly 
> important for immutable types where hash codes and isequal are more tightly 
> coupled than they are with mutable types.
>
> memhash_seed is a module/package specific constant that serves as the seed 
> value for hashing values of the new type.
> Here is an example: my type-specific hashing for values of type ArbFloat 
> (an extended precision floating point type),
> where the 'memhash_seed' analog is called hash_arbfloat_lo
>  
>
> # a type specific hash function helps the type to 'just work'
> const hash_arbfloat_lo = (UInt === UInt64) ? 0x37e642589da3416a : 0x5d46a6b4
> const hash_0_arbfloat_lo = hash(zero(UInt), hash_arbfloat_lo)
> # two values of the same precision
> # with identical midpoint significands and identical radius_exponentOf2s 
> hash equal
> # they are the same value, one is less accurate yet centered about the other
> hash{P}(z::ArbFloat{P}, h::UInt) =
> hash(z.significand1$z.exponentOf2,
>  (h $ hash(z.significand2$(~reinterpret(UInt,P)), hash_arbfloat_lo)
> $ hash_0_arbfloat_lo))
>
>
>
>
> On Tuesday, August 9, 2016 at 11:49:24 AM UTC-4, Páll Haraldsson wrote:
>>
>>
>> A.
>>
>> julia> @less Base.memhash_seed
>> ERROR: ArgumentError: argument is not a generic function
>>  in methods at ./reflection.jl:140
>>
>> I know what this means, but it would be nice for it to look up the 
>> source, and when it's actually a function (sometimes, I'm not sure of the 
>> right parameters..). Is there a way for @less, @which @edit (I guess always 
>> same mechanism) to work? Maybe already in 0.5? Easy to add? I like 
>> reading the source code to learn, and not have to google or search github 
>> or ask here..
>>
>> [apropos got me nowhere.]
>>
>>
>> B.
>>
>> Bonus point, I think I know what memhash[_seed] is.. but what is it.
>>
>>
>> C.
>>
>> Trying to google it:
>>
>> https://searchcode.com/codesearch/view/84537847/
>>
>> Language  Lisp
>> [Clearly Julia source code, elsewhere I've seen Python autodetected..]
>>
>> -- 
>> Palli.
>>
>>

Re: [julia-users] Adding tuples

2016-08-10 Thread 'Bill Hart' via julia-users
For me, the version using CartesianIndex is exactly the same speed as the
syntax with the Val, which in turn is faster than without Val.

It probably depends a lot on the application and what the compiler can
handle.

Bill.
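For reference, the `Val{N}` variant discussed here can be sketched for the `bill` type from earlier in the thread (0.5-era syntax; wrapping the result back in `bill{N}` is illustrative, not quoted from the thread):

```julia
immutable bill{N}
    d::NTuple{N, Int}
end

# Passing Val{N} lets the compiler specialize ntuple on the tuple length,
# which is what makes this faster than the plain ntuple(f, N) form.
Base.:+{N}(a::bill{N}, b::bill{N}) =
    bill{N}(ntuple(i -> a.d[i] + b.d[i], Val{N}))
```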

On 10 August 2016 at 20:45, Jeffrey Sarnoff 
wrote:

> relative to the same thing without the Val
>
>
> On Wednesday, August 10, 2016 at 2:45:06 PM UTC-4, Jeffrey Sarnoff wrote:
>>
>> that slows it down by a factor of 5
>>
>> On Wednesday, August 10, 2016 at 2:37:25 PM UTC-4, Bill Hart wrote:
>>>
>>> How about compared with:
>>>
>>> ntuple(i -> a[i] + b[i], Val{N})
>>>
>>>
>>> On 10 August 2016 at 20:32, Jeffrey Sarnoff 
>>> wrote:
>>>
 Bill,

 Following Erik's note, I tried (with a,b equi-length tuples)
 function addTuples(a,b)
 ca = CartesianIndex(a)
 cb = CartesianIndex(b)
 return (ca+cb).I
 end


 for me, with 100 values it ran ~60% faster, and with 1000 values much
 much faster than
  ntuple(i -> a[i] + b[i], N)



 On Wednesday, August 10, 2016 at 11:06:46 AM UTC-4, Bill Hart wrote:
>
> This code seems to be (about 50%) faster than recursive functions:
>
> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>
>
> But this seems (about 50%) slower:
>
>  ((a[i] + b[i] for i = 1:N)...)
>
>
> Anyway, I can use the first method, until I find something faster.
> It's definitely way more convenient. Thanks.
>
> Bill.
>
>
>
> On 10 August 2016 at 16:56, Erik Schnetter  wrote:
>
>> The built-in type `CartesianIndex` supports adding and subtracting,
>> and presumably also multiplication. It is implemented very efficiently,
>> based on tuples.
>>
>> Otherwise, to generate efficient code, you might have to make use of
>> "generated functions". These are similar to macros, but they know about 
>> the
>> types upon which they act, and thus know the value of `N`. This is a bit
>> low-level, so I'd use this only if (a) there is no other package
>> available, and (b) you have examined Julia's performance and found it
>> lacking.
>>
>> I would avoid overloading operators for `NTuple`, and instead use a
>> new immutable type, since overloading operations for Julia's tuples can
>> have unintended side effects.
>>
>> -erik
>>
>>
>> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
>> julia...@googlegroups.com> wrote:
>>
>>> Does anyone know an efficient way to add NTuples in Julia?
>>>
>>> I can do it using recursive functions, but for various reasons this
>>> is not efficient in my context. I really miss something like tuple(a[i] 
>>> +
>>> b[i] for i in 1:N) to create the resulting tuple all in one go (here a 
>>> and
>>> b would be tuples).
>>>
>>> The compiler doesn't do badly with recursive functions for handling
>>> tuples in very straightforward situations, but for example, if I want to
>>> create an immutable type based on a tuple the compiler doesn't seem to 
>>> be
>>> able to handle the necessary optimisations. At least, that is what I 
>>> infer
>>> from the timings. Consider
>>>
>>> immutable bill{N}
>>>d::NTuple{N, Int}
>>> end
>>>
>>> and I want to add two bill's together. If I have to add the tuples
>>> themselves using recursive functions, then I no longer seem to be able 
>>> to
>>> do something like:
>>>
>>> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose
>>> elements are of type bill.
>>>
>>> I know how to handle tuples via arrays, but for efficiency reasons I
>>> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 
>>> 1:N]...).
>>>
>>> Bill.
>>>
>>
>>
>>
>> --
>> Erik Schnetter  http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>
>
>>>


Re: [julia-users] Adding tuples

2016-08-10 Thread Jeffrey Sarnoff
almost certainly -- I use BenchmarkTools for this sort of timing, and can 
recommend it.
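A minimal sketch of the timing meant here, assuming the `addTuples` function from the quoted message is already defined:

```julia
using BenchmarkTools

a = ntuple(i -> i, 100)
b = ntuple(i -> 2i, 100)

# Interpolate globals with $ so the benchmark measures the call itself,
# not the cost of looking up untyped global variables.
@benchmark addTuples($a, $b)
```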

On Wednesday, August 10, 2016 at 2:56:17 PM UTC-4, Bill Hart wrote:
>
> For me, the version using CartesianIndex is exactly the same speed as the 
> syntax with the Val, which in turn is faster than without Val.
>
> It probably depends a lot on the application and what the compiler can 
> handle.
>
> Bill.
>
> On 10 August 2016 at 20:45, Jeffrey Sarnoff  > wrote:
>
>> relative to the same thing without the Val
>>
>>
>> On Wednesday, August 10, 2016 at 2:45:06 PM UTC-4, Jeffrey Sarnoff wrote:
>>>
>>> that slows it down by a factor of 5
>>>
>>> On Wednesday, August 10, 2016 at 2:37:25 PM UTC-4, Bill Hart wrote:

 How about compared with:

 ntuple(i -> a[i] + b[i], Val{N})


 On 10 August 2016 at 20:32, Jeffrey Sarnoff  
 wrote:

> Bill,
>
> Following Erik's note, I tried (with a,b equi-length tuples)
> function addTuples(a,b)
> ca = CartesianIndex(a)
> cb = CartesianIndex(b)
> return (ca+cb).I
> end
>
>
> for me, with 100 values it ran ~60% faster, and with 1000 values much 
> much faster than  
>  ntuple(i -> a[i] + b[i], N)
>
>
>
> On Wednesday, August 10, 2016 at 11:06:46 AM UTC-4, Bill Hart wrote:
>>
>> This code seems to be (about 50%) faster than recursive functions:
>>
>> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>>
>>
>> But this seems (about 50%) slower:
>>
>>  ((a[i] + b[i] for i = 1:N)...)
>>
>>
>> Anyway, I can use the first method, until I find something faster. 
>> It's definitely way more convenient. Thanks.
>>
>> Bill. 
>>
>>
>>
>> On 10 August 2016 at 16:56, Erik Schnetter  wrote:
>>
>>> The built-in type `CartesianIndex` supports adding and subtracting, 
>>> and presumably also multiplication. It is implemented very efficiently, 
>>> based on tuples.
>>>
>>> Otherwise, to generate efficient code, you might have to make use of 
>>> "generated functions". These are similar to macros, but they know about 
>>> the 
>>> types upon which they act, and thus know the value of `N`. This is a 
>>> bit 
>>> low-level, so I'd use this only if (a) there is no other package 
>>> available, and (b) you have examined Julia's performance and found it 
>>> lacking.
>>>
>>> I would avoid overloading operators for `NTuple`, and instead use a 
>>> new immutable type, since overloading operations for Julia's tuples can 
>>> have unintended side effects.
>>>
>>> -erik
>>>
>>>
>>> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
>>> julia...@googlegroups.com> wrote:
>>>
 Does anyone know an efficient way to add NTuples in Julia?

 I can do it using recursive functions, but for various reasons this 
 is not efficient in my context. I really miss something like 
 tuple(a[i] + 
 b[i] for i in 1:N) to create the resulting tuple all in one go (here a 
 and 
 b would be tuples).

 The compiler doesn't do badly with recursive functions for handling 
 tuples in very straightforward situations, but for example, if I want 
 to 
 create an immutable type based on a tuple the compiler doesn't seem to 
 be 
 able to handle the necessary optimisations. At least, that is what I 
 infer 
 from the timings. Consider

 immutable bill{N}
d::NTuple{N, Int}
 end

 and I want to add two bill's together. If I have to add the tuples 
 themselves using recursive functions, then I no longer seem to be able 
 to 
 do something like:

 A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose 
 elements are of type bill.

 I know how to handle tuples via arrays, but for efficiency reasons 
 I certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 
 1:N]...).

 Bill.

>>>
>>>
>>>
>>> -- 
>>> Erik Schnetter  
>>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>>
>>
>>

>

[julia-users] Re: What do do when @less does not work?

2016-08-10 Thread Jeffrey Sarnoff
When one writes a module/package that implements a new type it is prudent 
to provide a type-specific value hashing function.  This is particularly 
important for immutable types where hash codes and isequal are more tightly 
coupled than they are with mutable types.

memhash_seed is a module/package specific constant that serves as the seed 
value for hashing values of the new type.
Here is an example: my type-specific hashing for values of type ArbFloat 
(an extended precision floating point type),
where the 'memhash_seed' analog is called hash_arbfloat_lo
 

# a type specific hash function helps the type to 'just work'
const hash_arbfloat_lo = (UInt === UInt64) ? 0x37e642589da3416a : 0x5d46a6b4
const hash_0_arbfloat_lo = hash(zero(UInt), hash_arbfloat_lo)
# two values of the same precision
# with identical midpoint significands and identical radius_exponentOf2s hash equal
# they are the same value, one is less accurate yet centered about the other
hash{P}(z::ArbFloat{P}, h::UInt) =
hash(z.significand1$z.exponentOf2,
 (h $ hash(z.significand2$(~reinterpret(UInt,P)), hash_arbfloat_lo)
$ hash_0_arbfloat_lo))
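The same pattern in miniature, for a hypothetical immutable `Interval` (the seed constants are arbitrary values I chose for illustration, as in the ArbFloat example; `$` is bitwise xor in 0.4/0.5):

```julia
immutable Interval
    lo::Float64
    hi::Float64
end

# A package-specific seed keeps Interval hashes from colliding with hashes
# of other types that happen to wrap the same two Float64 fields.
const hash_interval_seed = (UInt === UInt64) ? 0x2c6e5f3a9d41b807 : 0x1b3d5f71

# Mix both fields and the incoming h, so isequal values hash equal.
Base.hash(x::Interval, h::UInt) =
    hash(x.lo, hash(x.hi, h $ hash_interval_seed))
```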




On Tuesday, August 9, 2016 at 11:49:24 AM UTC-4, Páll Haraldsson wrote:
>
>
> A.
>
> julia> @less Base.memhash_seed
> ERROR: ArgumentError: argument is not a generic function
>  in methods at ./reflection.jl:140
>
> I know what this means, but it would be nice for it to look up the source, 
> and when it's actually a function (sometimes, I'm not sure of the right 
> parameters..). Is there a way for @less, @which @edit (I guess always same 
> mechanism) to work? Maybe already in 0.5? Easy to add? I like reading the 
> source code to learn, and not have to google or search github or ask here..
>
> [apropos got me nowhere.]
>
>
> B.
>
> Bonus point, I think I know what memhash[_seed] is.. but what is it.
>
>
> C.
>
> Trying to google it:
>
> https://searchcode.com/codesearch/view/84537847/
>
> Language  Lisp
> [Clearly Julia source code, elsewhere I've seen Python autodetected..]
>
> -- 
> Palli.
>
>

Re: [julia-users] Adding tuples

2016-08-10 Thread Jeffrey Sarnoff
relative to the same thing without the Val

On Wednesday, August 10, 2016 at 2:45:06 PM UTC-4, Jeffrey Sarnoff wrote:
>
> that slows it down by a factor of 5
>
> On Wednesday, August 10, 2016 at 2:37:25 PM UTC-4, Bill Hart wrote:
>>
>> How about compared with:
>>
>> ntuple(i -> a[i] + b[i], Val{N})
>>
>>
>> On 10 August 2016 at 20:32, Jeffrey Sarnoff  wrote:
>>
>>> Bill,
>>>
>>> Following Erik's note, I tried (with a,b equi-length tuples)
>>> function addTuples(a,b)
>>> ca = CartesianIndex(a)
>>> cb = CartesianIndex(b)
>>> return (ca+cb).I
>>> end
>>>
>>>
>>> for me, with 100 values it ran ~60% faster, and with 1000 values much 
>>> much faster than  
>>>  ntuple(i -> a[i] + b[i], N)
>>>
>>>
>>>
>>> On Wednesday, August 10, 2016 at 11:06:46 AM UTC-4, Bill Hart wrote:

 This code seems to be (about 50%) faster than recursive functions:

 Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)


 But this seems (about 50%) slower:

  ((a[i] + b[i] for i = 1:N)...)


 Anyway, I can use the first method, until I find something faster. It's 
 definitely way more convenient. Thanks.

 Bill. 



 On 10 August 2016 at 16:56, Erik Schnetter  wrote:

> The built-in type `CartesianIndex` supports adding and subtracting, 
> and presumably also multiplication. It is implemented very efficiently, 
> based on tuples.
>
> Otherwise, to generate efficient code, you might have to make use of 
> "generated functions". These are similar to macros, but they know about 
> the 
> types upon which they act, and thus know the value of `N`. This is a bit 
> low-level, so I'd use this only if (a) there is no other package 
> available, and (b) you have examined Julia's performance and found it 
> lacking.
>
> I would avoid overloading operators for `NTuple`, and instead use a new 
> immutable type, since overloading operations for Julia's tuples can have 
> unintended side effects.
>
> -erik
>
>
> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
> julia...@googlegroups.com> wrote:
>
>> Does anyone know an efficient way to add NTuples in Julia?
>>
>> I can do it using recursive functions, but for various reasons this 
>> is not efficient in my context. I really miss something like tuple(a[i] 
>> + 
>> b[i] for i in 1:N) to create the resulting tuple all in one go (here a 
>> and 
>> b would be tuples).
>>
>> The compiler doesn't do badly with recursive functions for handling 
>> tuples in very straightforward situations, but for example, if I want to 
>> create an immutable type based on a tuple the compiler doesn't seem to 
>> be 
>> able to handle the necessary optimisations. At least, that is what I 
>> infer 
>> from the timings. Consider
>>
>> immutable bill{N}
>>d::NTuple{N, Int}
>> end
>>
>> and I want to add two bill's together. If I have to add the tuples 
>> themselves using recursive functions, then I no longer seem to be able 
>> to 
>> do something like:
>>
>> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose 
>> elements are of type bill.
>>
>> I know how to handle tuples via arrays, but for efficiency reasons I 
>> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 
>> 1:N]...).
>>
>> Bill.
>>
>
>
>
> -- 
> Erik Schnetter  
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


>>

Re: [julia-users] Adding tuples

2016-08-10 Thread Jeffrey Sarnoff
that slows it down by a factor of 5

On Wednesday, August 10, 2016 at 2:37:25 PM UTC-4, Bill Hart wrote:
>
> How about compared with:
>
> ntuple(i -> a[i] + b[i], Val{N})
>
>
> On 10 August 2016 at 20:32, Jeffrey Sarnoff  > wrote:
>
>> Bill,
>>
>> Following Erik's note, I tried (with a,b equi-length tuples)
>> function addTuples(a,b)
>> ca = CartesianIndex(a)
>> cb = CartesianIndex(b)
>> return (ca+cb).I
>> end
>>
>>
>> for me, with 100 values it ran ~60% faster, and with 1000 values much 
>> much faster than  
>>  ntuple(i -> a[i] + b[i], N)
>>
>>
>>
>> On Wednesday, August 10, 2016 at 11:06:46 AM UTC-4, Bill Hart wrote:
>>>
>>> This code seems to be (about 50%) faster than recursive functions:
>>>
>>> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>>>
>>>
>>> But this seems (about 50%) slower:
>>>
>>>  ((a[i] + b[i] for i = 1:N)...)
>>>
>>>
>>> Anyway, I can use the first method, until I find something faster. It's 
>>> definitely way more convenient. Thanks.
>>>
>>> Bill. 
>>>
>>>
>>>
>>> On 10 August 2016 at 16:56, Erik Schnetter  wrote:
>>>
 The built-in type `CartesianIndex` supports adding and subtracting, and 
 presumably also multiplication. It is implemented very efficiently, based 
 on tuples.

 Otherwise, to generate efficient code, you might have to make use of 
 "generated functions". These are similar to macros, but they know about 
 the 
 types upon which they act, and thus know the value of `N`. This is a bit 
 low-level, so I'd use this only if (a) there is no other package 
 available, and (b) you have examined Julia's performance and found it 
 lacking.

 I would avoid overloading operators for `NTuple`, and instead use a new 
 immutable type, since overloading operations for Julia's tuples can have 
 unintended side effects.

 -erik


 On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
 julia...@googlegroups.com> wrote:

> Does anyone know an efficient way to add NTuples in Julia?
>
> I can do it using recursive functions, but for various reasons this is 
> not efficient in my context. I really miss something like tuple(a[i] + 
> b[i] 
> for i in 1:N) to create the resulting tuple all in one go (here a and b 
> would be tuples).
>
> The compiler doesn't do badly with recursive functions for handling 
> tuples in very straightforward situations, but for example, if I want to 
> create an immutable type based on a tuple the compiler doesn't seem to be 
> able to handle the necessary optimisations. At least, that is what I 
> infer 
> from the timings. Consider
>
> immutable bill{N}
>d::NTuple{N, Int}
> end
>
> and I want to add two bill's together. If I have to add the tuples 
> themselves using recursive functions, then I no longer seem to be able to 
> do something like:
>
> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose 
> elements are of type bill.
>
> I know how to handle tuples via arrays, but for efficiency reasons I 
> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 
> 1:N]...).
>
> Bill.
>



 -- 
 Erik Schnetter  
 http://www.perimeterinstitute.ca/personal/eschnetter/

>>>
>>>
>

Re: [julia-users] Adding tuples

2016-08-10 Thread 'Bill Hart' via julia-users
How about compared with:

ntuple(i -> a[i] + b[i], Val{N})


On 10 August 2016 at 20:32, Jeffrey Sarnoff 
wrote:

> Bill,
>
> Following Erik's note, I tried (with a,b equi-length tuples)
> function addTuples(a,b)
> ca = CartesianIndex(a)
> cb = CartesianIndex(b)
> return (ca+cb).I
> end
>
>
> for me, with 100 values it ran ~60% faster, and with 1000 values much much
> faster than
>  ntuple(i -> a[i] + b[i], N)
>
>
>
> On Wednesday, August 10, 2016 at 11:06:46 AM UTC-4, Bill Hart wrote:
>>
>> This code seems to be (about 50%) faster than recursive functions:
>>
>> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>>
>>
>> But this seems (about 50%) slower:
>>
>>  ((a[i] + b[i] for i = 1:N)...)
>>
>>
>> Anyway, I can use the first method, until I find something faster. It's
>> definitely way more convenient. Thanks.
>>
>> Bill.
>>
>>
>>
>> On 10 August 2016 at 16:56, Erik Schnetter  wrote:
>>
>>> The built-in type `CartesianIndex` supports adding and subtracting, and
>>> presumably also multiplication. It is implemented very efficiently, based
>>> on tuples.
>>>
>>> Otherwise, to generate efficient code, you might have to make use of
>>> "generated functions". These are similar to macros, but they know about the
>>> types upon which they act, and thus know the value of `N`. This is a bit
>>> low-level, so I'd use this only if (a) there is no other package
>>> available, and (b) you have examined Julia's performance and found it
>>> lacking.
>>>
>>> I would avoid overloading operators for `NTuple`, and instead use a new
>>> immutable type, since overloading operations for Julia's tuples can have
>>> unintended side effects.
>>>
>>> -erik
>>>
>>>
>>> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
>>> julia...@googlegroups.com> wrote:
>>>
 Does anyone know an efficient way to add NTuples in Julia?

 I can do it using recursive functions, but for various reasons this is
 not efficient in my context. I really miss something like tuple(a[i] + b[i]
 for i in 1:N) to create the resulting tuple all in one go (here a and b
 would be tuples).

 The compiler doesn't do badly with recursive functions for handling
 tuples in very straightforward situations, but for example, if I want to
 create an immutable type based on a tuple the compiler doesn't seem to be
 able to handle the necessary optimisations. At least, that is what I infer
 from the timings. Consider

 immutable bill{N}
d::NTuple{N, Int}
 end

 and I want to add two bill's together. If I have to add the tuples
 themselves using recursive functions, then I no longer seem to be able to
 do something like:

 A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose
 elements are of type bill.

 I know how to handle tuples via arrays, but for efficiency reasons I
 certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).

 Bill.

>>>
>>>
>>>
>>> --
>>> Erik Schnetter  http://www.perimeterinstitute.
>>> ca/personal/eschnetter/
>>>
>>
>>


Re: [julia-users] Adding tuples

2016-08-10 Thread Jeffrey Sarnoff
Bill,

Following Erik's note, I tried (with a,b equi-length tuples)
function addTuples(a,b)
ca = CartesianIndex(a)
cb = CartesianIndex(b)
return (ca+cb).I
end


for me, with 100 values it ran ~60% faster, and with 1000 values much much 
faster than  
 ntuple(i -> a[i] + b[i], N)
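Another route mentioned in the thread is a generated function, which sees N in the argument types and can unroll the sum completely. A hedged sketch (0.5-era syntax; the name `addTuplesGen` is illustrative):

```julia
# Expands to :(tuple(a[1] + b[1], a[2] + b[2], ...)) at compile time,
# since N is known from the argument types inside the generated body.
@generated function addTuplesGen{N}(a::NTuple{N}, b::NTuple{N})
    exprs = [:(a[$i] + b[$i]) for i = 1:N]
    return :(tuple($(exprs...)))
end
```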



On Wednesday, August 10, 2016 at 11:06:46 AM UTC-4, Bill Hart wrote:
>
> This code seems to be (about 50%) faster than recursive functions:
>
> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>
>
> But this seems (about 50%) slower:
>
>  ((a[i] + b[i] for i = 1:N)...)
>
>
> Anyway, I can use the first method, until I find something faster. It's 
> definitely way more convenient. Thanks.
>
> Bill. 
>
>
>
> On 10 August 2016 at 16:56, Erik Schnetter  > wrote:
>
>> The built-in type `CartesianIndex` supports adding and subtracting, and 
>> presumably also multiplication. It is implemented very efficiently, based 
>> on tuples.
>>
>> Otherwise, to generate efficient code, you might have to make use of 
>> "generated functions". These are similar to macros, but they know about the 
>> types upon which they act, and thus know the value of `N`. This is a bit 
>> low-level, so I'd use this only if (a) there is no other package 
>> available, and (b) you have examined Julia's performance and found it 
>> lacking.
>>
>> I would avoid overloading operators for `NTuple`, and instead use a new 
>> immutable type, since overloading operations for Julia's tuples can have 
>> unintended side effects.
>>
>> -erik
>>
>>
>> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
>> julia...@googlegroups.com > wrote:
>>
>>> Does anyone know an efficient way to add NTuples in Julia?
>>>
>>> I can do it using recursive functions, but for various reasons this is 
>>> not efficient in my context. I really miss something like tuple(a[i] + b[i] 
>>> for i in 1:N) to create the resulting tuple all in one go (here a and b 
>>> would be tuples).
>>>
>>> The compiler doesn't do badly with recursive functions for handling 
>>> tuples in very straightforward situations, but for example, if I want to 
>>> create an immutable type based on a tuple the compiler doesn't seem to be 
>>> able to handle the necessary optimisations. At least, that is what I infer 
>>> from the timings. Consider
>>>
>>> immutable bill{N}
>>>d::NTuple{N, Int}
>>> end
>>>
>>> and I want to add two bill's together. If I have to add the tuples 
>>> themselves using recursive functions, then I no longer seem to be able to 
>>> do something like:
>>>
>>> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose 
>>> elements are of type bill.
>>>
>>> I know how to handle tuples via arrays, but for efficiency reasons I 
>>> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).
>>>
>>> Bill.
>>>
>>
>>
>>
>> -- 
>> Erik Schnetter  
>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>
>
>

Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Jameson
AFAIK, defining an arbitrary new type at runtime is impossible, sorry. In 
v0.4 it was allowed, because we hoped that people understood not to try. 
See also https://github.com/JuliaLang/julia/issues/16806. Note that it is 
insufficient to "handle" the repeated calls via caching in a Dict or 
similar mechanism. It must always compute the exact final output from 
the input values alone (e.g. it must truly be const pure).

Being able to define types with arbitrary constraints in the type 
parameters works OK for toy demos, but it's intentionally rather difficult 
since it causes performance issues at scale. Operations on Array are likely 
to be much faster (including the allocation) than on Tuple (due to the cost 
of *not* allocating) unless that Tuple is very small.
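As a sketch of the purity requirement: a generated function may compute anything from the argument *types*, but must not read or mutate global state. This example is illustrative, not from the thread:

```julia
# OK: the generated body depends only on T and N from the input type.
@generated function zerotuple{T,N}(::Type{NTuple{N,T}})
    return :(tuple($(fill(zero(T), N)...)))
end

# Not OK (per the point above): caching generated results in a global Dict,
# or calling eval to define a new type, is a side effect of generation,
# even when the cached result looks deterministic.
```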



On Wednesday, August 10, 2016 at 1:25:15 PM UTC-4, Erik Schnetter wrote:
>
> I want to create a type, and need more flexibility than Julia's `type` 
> definitions offer (see ). 
> Currently, I have a function that generates the type, and returns the type.
>
> I would like to make this a generated function (as it was in Julia 0.4). 
> The advantage is that this leads to type stability: The generated type only 
> depends on the types of the arguments passed to the function, and Julia would 
> be able to infer the type.
>
> In practice, this looks like
>
> using FastArrays
> # A (10x10) fixed-size array
> typealias Arr2d_10x10 FastArray(1:10, 1:10)
> a2 = Arr2d_10x10{Float64}(:,:)
>
>
> In principle I'd like to write `FastArray{1:10, 1:10}` (with curly 
> braces), but Julia doesn't offer sufficient flexibility for this. Hence I 
> use a regular function.
>
> To generate the type in the function I need to call `eval`. (Yes, I'm 
> aware that the function might be called multiple times, and I'm handling 
> this.)
>
> Do you have a suggestion for a different solution?
>
> -erik
>
>
> On Wed, Aug 10, 2016 at 11:51 AM, Jameson  wrote:
>
>> It is tracking the dynamic scope of the code generator, it doesn't care 
>> about what code you emit. The generator function must not cause any 
>> side-effects and must be entirely computed from the types of the inputs and 
>> not other global state. Over time, these conditions are likely to be more 
>> accurately enforced, as needed to make various optimizations reliable 
>> and/or correct.
>>
>>
>>
>> On Wednesday, August 10, 2016 at 10:48:31 AM UTC-4, Erik Schnetter wrote:
>>>
>>> I'm encountering the error "eval cannot be used in a generated function" 
>>> in Julia 0.5 for code that is working in Julia 0.4. My question is -- what 
>>> exactly is now disallowed? For example, if a generated function `f` calls 
>>> another (non-generated) function `g`, can `g` then call `eval`? Does the 
>>> word "in" here refer to the code that is generated by the generated 
>>> function, or does it refer to the dynamical scope of the code generation 
>>> state of the generated function?
>>>
>>> To avoid the error I have to redesign my code, and I'd like to know 
>>> ahead of time what to avoid. A Google search only turned up the C file 
>>> within Julia that emits the respective error message, as well as the Travis 
>>> build log for my package.
>>>
>>> -erik
>>>
>>> -- 
>>> Erik Schnetter  
>>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>>
>>
>
>
> -- 
> Erik Schnetter  
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
I want to create a type, and need more flexibility than Julia's `type`
definitions offer (see ).
Currently, I have a function that generates the type, and returns the type.

I would like to make this a generated function (as it was in Julia 0.4).
The advantage is that this leads to type stability: The generated type only
depends on the types of the arguments passed to the function, and Julia would
be able to infer the type.

In practice, this looks like

using FastArrays
# A (10x10) fixed-size array
typealias Arr2d_10x10 FastArray(1:10, 1:10)
a2 = Arr2d_10x10{Float64}(:,:)


In principle I'd like to write `FastArray{1:10, 1:10}` (with curly braces),
but Julia doesn't offer sufficient flexibility for this. Hence I use a
regular function.

To generate the type in the function I need to call `eval`. (Yes, I'm aware
that the function might be called multiple times, and I'm handling this.)

Do you have a suggestion for a different solution?

-erik


On Wed, Aug 10, 2016 at 11:51 AM, Jameson  wrote:

> It is tracking the dynamic scope of the code generator, it doesn't care
> about what code you emit. The generator function must not cause any
> side-effects and must be entirely computed from the types of the inputs and
> not other global state. Over time, these conditions are likely to be more
> accurately enforced, as needed to make various optimizations reliable
> and/or correct.
>
>
>
> On Wednesday, August 10, 2016 at 10:48:31 AM UTC-4, Erik Schnetter wrote:
>>
>> I'm encountering the error "eval cannot be used in a generated function"
>> in Julia 0.5 for code that is working in Julia 0.4. My question is -- what
>> exactly is now disallowed? For example, if a generated function `f` calls
>> another (non-generated) function `g`, can `g` then call `eval`? Does the
>> word "in" here refer to the code that is generated by the generated
>> function, or does it refer to the dynamical scope of the code generation
>> state of the generated function?
>>
>> To avoid the error I have to redesign my code, and I'd like to know ahead
>> of time what to avoid. A Google search only turned up the C file within
>> Julia that emits the respective error message, as well as the Travis build
>> log for my package.
>>
>> -erik
>>
>> --
>> Erik Schnetter  http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>


-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] Re: Use reference of array comprehension internal variables?

2016-08-10 Thread Ismael Venegas Castelló
OMG that's awesome, we need more docs about this feature, thank you very 
much Stefan!

El miércoles, 10 de agosto de 2016, 12:02:34 (UTC-5), Stefan Karpinski 
escribió:
>
> julia> [(x, y, z) for x = 1:n for y = x:n for z = y:n if x^2 + y^2 == z^2]
> 6-element Array{Tuple{Int64,Int64,Int64},1}:
>  (3,4,5)
>  (5,12,13)
>  (6,8,10)
>  (8,15,17)
>  (9,12,15)
>  (12,16,20)
>
> On Wed, Aug 10, 2016 at 12:58 PM, Ismael Venegas Castelló <
> ismael...@gmail.com > wrote:
>
>> In order to be a little more specific I wanted to add, that it seems 
>> weird that I can use the variables for the if clause, but not for creating 
>> the other ranges, it's just that I don't know how to express myself 
>> correctly, I hope you can understand me.
>>
>>
>> El miércoles, 10 de agosto de 2016, 11:56:00 (UTC-5), Ismael Venegas 
>> Castelló escribió:
>>>
>>> Is there a way to make reference of the internal variables of an array 
>>> comprehension? I'm trying to improve this Rosetta Code task:
>>>
>>>
>>>- https://rosettacode.org/wiki/List_comprehensions#Julia
>>>
>>>
>>> const n = 20
>>> sort(filter(x -> x[1] < x[2] && x[1]^2 + x[2]^2 == x[3]^2, [(a, b, c) 
>>> for a=1:n, b=1:n, c=1:n]))
>>>
>>>
>>> In Python it's:
>>>
>>> In [2]: n = 20
>>>
>>> In [3]: [(x,y,z) for x in xrange(1,n+1) for y in xrange(x,n+1) for z in 
>>> xrange(y,n+1) if x**2 + y**2 == z**2]
>>> Out[3]: [(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (
>>> 12, 16, 20)]
>>>
>>>
>>> I'll update the task with:
>>>
>>> julia> [(x, y, z) for x = 1:n, y = 1:n, z = 1:n if x < y && x^2 + y^2 == 
>>> z^2] |> sort
>>> 6-element Array{Tuple{Int64,Int64,Int64},1}:
>>>  (3,4,5)
>>>  (5,12,13)
>>>  (6,8,10)
>>>  (8,15,17)
>>>  (9,12,15)
>>>  (12,16,20)
>>>
>>> But I tried this and it doesn't work, I wonder why? 
>>>
>>> julia> [(x, y, z) for x = 1:n, y = x:n, z = y:n if x^2 + y^2 == z^2]
>>> ERROR: UndefVarError: x not defined
>>>
>>> Is there a way to do this? Thanks in advance!
>>>
>>
>

Re: [julia-users] Re: Use reference of array comprehension internal variables?

2016-08-10 Thread Stefan Karpinski
julia> [(x, y, z) for x = 1:n for y = x:n for z = y:n if x^2 + y^2 == z^2]
6-element Array{Tuple{Int64,Int64,Int64},1}:
 (3,4,5)
 (5,12,13)
 (6,8,10)
 (8,15,17)
 (9,12,15)
 (12,16,20)
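
The two forms differ in how the ranges are evaluated: the comma-separated
`for x = 1:n, y = 1:n` builds a Cartesian product whose ranges are all
computed up front, so a later range cannot refer to an earlier variable,
while the nested `for ... for ...` form (which requires Julia 0.5)
evaluates each inner range once per outer iteration. A minimal sketch of
the difference:

```julia
n = 5

# Comma form: a Cartesian product. Writing `y = x:n` here raises
# UndefVarError, because every range is evaluated before any
# comprehension variable is bound.
a = [(x, y) for x = 1:n, y = 1:n]      # 5x5 matrix of tuples

# Nested form: the inner range may depend on the outer variable,
# and the result is flattened into a vector.
b = [(x, y) for x = 1:n for y = x:n]   # 15-element vector
```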

On Wed, Aug 10, 2016 at 12:58 PM, Ismael Venegas Castelló <
ismael.vc1...@gmail.com> wrote:

> In order to be a little more specific I wanted to add, that it seems weird
> that I can use the variables for the if clause, but not for creating the
> other ranges, it's just that I don't know how to express myself correctly,
> I hope you can understand me.
>
>
> El miércoles, 10 de agosto de 2016, 11:56:00 (UTC-5), Ismael Venegas
> Castelló escribió:
>>
>> Is there a way to make reference of the internal variables of an array
>> comprehension? I'm trying to improve this Rosetta Code task:
>>
>>
>>- https://rosettacode.org/wiki/List_comprehensions#Julia
>>
>>
>> const n = 20
>> sort(filter(x -> x[1] < x[2] && x[1]^2 + x[2]^2 == x[3]^2, [(a, b, c) for
>> a=1:n, b=1:n, c=1:n]))
>>
>>
>> In Python it's:
>>
>> In [2]: n = 20
>>
>> In [3]: [(x,y,z) for x in xrange(1,n+1) for y in xrange(x,n+1) for z in
>> xrange(y,n+1) if x**2 + y**2 == z**2]
>> Out[3]: [(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (
>> 12, 16, 20)]
>>
>>
>> I'll update the task with:
>>
>> julia> [(x, y, z) for x = 1:n, y = 1:n, z = 1:n if x < y && x^2 + y^2 ==
>> z^2] |> sort
>> 6-element Array{Tuple{Int64,Int64,Int64},1}:
>>  (3,4,5)
>>  (5,12,13)
>>  (6,8,10)
>>  (8,15,17)
>>  (9,12,15)
>>  (12,16,20)
>>
>> But I tried this and it doesn't work, I wonder why?
>>
>> julia> [(x, y, z) for x = 1:n, y = x:n, z = y:n if x^2 + y^2 == z^2]
>> ERROR: UndefVarError: x not defined
>>
>> Is there a way to do this? Thanks in advance!
>>
>


[julia-users] Re: Use reference of array comprehension internal variables?

2016-08-10 Thread Ismael Venegas Castelló
To be a little more specific, I wanted to add that it seems weird 
that I can use the variables in the if clause, but not for creating the 
other ranges. It's just that I don't know how to express myself correctly; 
I hope you can understand me.

El miércoles, 10 de agosto de 2016, 11:56:00 (UTC-5), Ismael Venegas 
Castelló escribió:
>
> Is there a way to make reference of the internal variables of an array 
> comprehension? I'm trying to improve this Rosetta Code task:
>
>
>- https://rosettacode.org/wiki/List_comprehensions#Julia
>
>
> const n = 20
> sort(filter(x -> x[1] < x[2] && x[1]^2 + x[2]^2 == x[3]^2, [(a, b, c) for 
> a=1:n, b=1:n, c=1:n]))
>
>
> In Python it's:
>
> In [2]: n = 20
>
> In [3]: [(x,y,z) for x in xrange(1,n+1) for y in xrange(x,n+1) for z in 
> xrange(y,n+1) if x**2 + y**2 == z**2]
> Out[3]: [(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12
> , 16, 20)]
>
>
> I'll update the task with:
>
> julia> [(x, y, z) for x = 1:n, y = 1:n, z = 1:n if x < y && x^2 + y^2 == z
> ^2] |> sort
> 6-element Array{Tuple{Int64,Int64,Int64},1}:
>  (3,4,5)
>  (5,12,13)
>  (6,8,10)
>  (8,15,17)
>  (9,12,15)
>  (12,16,20)
>
> But I tried this and it doesn't work, I wonder why? 
>
> julia> [(x, y, z) for x = 1:n, y = x:n, z = y:n if x^2 + y^2 == z^2]
> ERROR: UndefVarError: x not defined
>
> Is there a way to do this? Thanks in advance!
>


[julia-users] Use reference of array comprehension internal variables?

2016-08-10 Thread Ismael Venegas Castelló
Is there a way to refer to the internal variables of an array 
comprehension? I'm trying to improve this Rosetta Code task:


   - https://rosettacode.org/wiki/List_comprehensions#Julia


const n = 20
sort(filter(x -> x[1] < x[2] && x[1]^2 + x[2]^2 == x[3]^2, [(a, b, c) for a=
1:n, b=1:n, c=1:n]))


In Python it's:

In [2]: n = 20

In [3]: [(x,y,z) for x in xrange(1,n+1) for y in xrange(x,n+1) for z in 
xrange(y,n+1) if x**2 + y**2 == z**2]
Out[3]: [(3, 4, 5), (5, 12, 13), (6, 8, 10), (8, 15, 17), (9, 12, 15), (12, 
16, 20)]


I'll update the task with:

julia> [(x, y, z) for x = 1:n, y = 1:n, z = 1:n if x < y && x^2 + y^2 == z^2
] |> sort
6-element Array{Tuple{Int64,Int64,Int64},1}:
 (3,4,5)
 (5,12,13)
 (6,8,10)
 (8,15,17)
 (9,12,15)
 (12,16,20)

But I tried this and it doesn't work, I wonder why? 

julia> [(x, y, z) for x = 1:n, y = x:n, z = y:n if x^2 + y^2 == z^2]
ERROR: UndefVarError: x not defined

Is there a way to do this? Thanks in advance!


[julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Gunnar Farnebäck
I realize the question is about the performance of the zip approach, but may 
I suggest just running mean(v)?
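
For readers wondering why this works: `sum` over a vector of vectors adds
them elementwise, so `mean(v)` yields the vector of per-index means
directly, with no splatting of 1000 arguments into `zip`. A hedged sketch:

```julia
v = [rand(100) for i in 1:1000]

# mean(v) == sum(v) / length(v); `+` on vectors is elementwise,
# so this computes the per-index means with length(v) - 1 vector
# additions instead of a 1000-way zip.
m3(v) = mean(v)
```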

On Wednesday, August 10, 2016 at 11:34:08 AM UTC+2, Tamas Papp wrote:
>
> I was benchmarking code for calculating means of vectors, eg 
>
> v = [rand(100) for i in 1:100] 
>
> m1(v) = map(mean, zip(v...)) 
> m2(v) = mapslices(mean, hcat(v...), 2) 
>
> @elapsed m1(v) 
> @elapsed m2(v) 
>
> m2 is faster, but for 1000 vectors, 
>
> v = [rand(100) for i in 1:1000] 
>
> m1 with zip takes "forever" (CPU keeps running at 100%, I interrupted 
> after 10 minutes). Is this supposed to happen, or should I report it 
> as a bug? 
>
> Tamas 
>


Re: [julia-users] Adding tuples

2016-08-10 Thread 'Bill Hart' via julia-users
You are right. My previous timings were totally wrong anyway, due to a bug
I introduced into my program. After fixing it, a fair comparison shows that:

Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], Val{N})


is exactly the same speed as the recursive functions I was using. And it is
at least 20% faster in my application than the version with N instead of
Val{N}.

Unfortunately, in my application, the other solution:

((a[i] + b[i] for i = 1:N)...)


is extremely slow, and in fact Julia crashes with a segfault before my
benchmark finishes (due to using too much memory). So that isn't an option.

Anyway, the first solution above is as fast as what I had, and much more
convenient. I will go with that for now.

Bill.
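
Following Erik's earlier advice to avoid overloading operators on Base
tuples, the same `Val{N}` trick can be attached to a small immutable
wrapper instead; a hedged sketch reusing the `bill` type from upthread:

```julia
immutable bill{N}
    d::NTuple{N, Int}
end

# ntuple with Val{N} lets the compiler specialize on N, so the
# loop unrolls and no temporary array is allocated, as discussed above.
Base.:+{N}(a::bill{N}, b::bill{N}) =
    bill{N}(ntuple(i -> a.d[i] + b.d[i], Val{N}))

bill((1, 2, 3)) + bill((10, 20, 30))   # bill{3}((11, 22, 33))
```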

On 10 August 2016 at 18:17, Shashi Gowda  wrote:

>
> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], Val{N})
>
> should be slightly faster and should not allocate unlike
>
> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>
>
> On Wed, Aug 10, 2016 at 8:36 PM, 'Bill Hart' via julia-users <
> julia-users@googlegroups.com> wrote:
>
>> This code seems to be (about 50%) faster than recursive functions:
>>
>> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>>
>>
>> But this seems (about 50%) slower:
>>
>>  ((a[i] + b[i] for i = 1:N)...)
>>
>>
>> Anyway, I can use the first method, until I find something faster. It's
>> definitely way more convenient. Thanks.
>>
>> Bill.
>>
>>
>>
>> On 10 August 2016 at 16:56, Erik Schnetter  wrote:
>>
>>> The built-in type `CartesianIndex` supports adding and subtracting, and
>>> presumably also multiplication. It is implemented very efficiently, based
>>> on tuples.
>>>
>>> Otherwise, to generate efficient code, you might have to make use of
>>> "generated functions". These are similar to macros, but they know about the
>>> types upon which they act, and thus know the value of `N`. This is a bit
>>> low-level, so I'd use this only if (a) there is no other package
>>> available, and (b) you have examined Julia's performance and found it
>>> lacking.
>>>
>>> I would avoid overloading operators for `NTuple`, and instead use a new
>>> immutable type, since overloading operations for Julia's tuples can have
>>> unintended side effects.
>>>
>>> -erik
>>>
>>>
>>> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
>>> julia-users@googlegroups.com> wrote:
>>>
 Does anyone know an efficient way to add NTuples in Julia?

 I can do it using recursive functions, but for various reasons this is
 not efficient in my context. I really miss something like tuple(a[i] + b[i]
 for i in 1:N) to create the resulting tuple all in one go (here a and b
 would be tuples).

 The compiler doesn't do badly with recursive functions for handling
 tuples in very straightforward situations, but for example, if I want to
 create an immutable type based on a tuple the compiler doesn't seem to be
 able to handle the necessary optimisations. At least, that is what I infer
 from the timings. Consider

 immutable bill{N}
d::NTuple{N, Int}
 end

 and I want to add two bill's together. If I have to add the tuples
 themselves using recursive functions, then I no longer seem to be able to
 do something like:

 A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose
 elements are of type bill.

 I know how to handle tuples via arrays, but for efficiency reasons I
 certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).

 Bill.

>>>
>>>
>>>
>>> --
>>> Erik Schnetter  http://www.perimeterinstitute.
>>> ca/personal/eschnetter/
>>>
>>
>>
>


Re: [julia-users] Combining composite type & PyPlot methods with @manipulate (regarding a port/extension of QuTiP's Bloch Sphere)

2016-08-10 Thread Willem Hekman
Hi Tom, 

Plots.jl is very interesting; I love the idea. 

However, at first glance I can't find an example where @manipulate is used; 
could you provide me with one?

Btw, I'll watch your JuliaCon talk this evening; I'm very curious ;-)  

On Wednesday, August 10, 2016 at 5:09:05 PM UTC+2, Tom Breloff wrote:
>
> Hi Willem.  If you make a Plots recipe for your type, you should be able 
> to use your pseudocode by replacing 'render' with 'plot'.
>
> https://juliaplots.github.io/recipes/
>
> On Wed, Aug 10, 2016 at 10:18 AM, Willem Hekman  > wrote:
>
>> Hello all,
>>
>> I have been spending my time making 
>> http://qutip.org/docs/3.1.0/guide/guide-bloch.html work in Julia by 
>> translating the main parts of the source code 
>> http://qutip.org/docs/3.1.0/modules/qutip/bloch.html 
>> . 
>>
>> In short, what I've worked out is, in pseudo-code:
>>
>> type Bloch
>>some properties.. :: of various types
>>vectors ( :: Vector{Vector{Float64}}  ) # Vector of vectors that can 
>> be plotted on the sphere.
>> end
>>
>> Bloch() = Bloch(standard properties, []) # initialize without any 
>> vectors to plot
>>
>> function add_vector(b::Bloch,vector::Vector{Float64})
>>push!(b.vectors,vector)
>> end
>>
>> function render(b::Bloch)
>>   plot a sphere
>>   plot equator
>>   plot x,y and z axis
>>   plot vectors
>>   style axes
>> end
>>
>> The actual code I've written so far can be found on 
>> https://github.com/whekman/edX/blob/master/Other/bloch.jl . Any advice 
>> on my code is much appreciated; I'm quite new to programming.
>>
>> Now *I'd love to make such rendering compatible with @manipulate*. I 
>> know that you can use withfig(fig) do  end but somehow I can't figure 
>> out how to incorporate it in this more object-style approach.
>>
>> Basically, I'm looking for a way to implement, in pseudo-code:
>>
>> b = Bloch()
>> @manipulate for azimuth 0:15:90, latitude -180:15:180
>>add_vector(b,azimuth,latitude)
>>render(b)
>> end
>>
>> So the goal is to have an easy way to draw points on such a sphere in an 
>> interactive way. Basically, I am having a hard time figuring out how to 
>> combine the use of such a composite type with @manipulate.
>>
>> Anyone know a solution?
>>
>> As an aside, the PyPlot code that I've written so far may be a nice, 
>> comprehensive example of 3D plotting using PyPlot. If so, how do I make 
>> sure people can find it?
>>
>> Furthermore, any hopes of plotting Arrow3D objects from inside Julia? It 
>> is already possible in matplotlib 
>> http://stackoverflow.com/questions/29188612/arrows-in-matplotlib-using-mplot3d
>>
>> - Willem
>>
>
>

[julia-users] Re: Why is Julia 0.5 built from source almost twice as large (on disk) as Julia 0.4?

2016-08-10 Thread Jameson
Yes, you can delete anything old (the same goes for usr-staging and 
deps/srccache). I've also been slowly developing a PR that will allow the 
build system to automatically erase the build directories after it's 
finished with them, but it's not ready yet. But someday...



On Wednesday, August 10, 2016 at 5:33:50 AM UTC-4, Tomas Lycken wrote:
>
> Thanks for the replies.
>
> Is it safe to assume that anything in deps/build that exists in multiple 
> versions, is only needed in the latest of those? For instance, I have 
>
> ```
> 159M deps/build/llvm-3.3
> 318M deps/build/llvm-3.7.1
> 881M deps/build/openblas-12ab1804b6ebcd38b26960d65d254314d8bc33d6
> 943M deps/build/openblas
> ```
>
> where it seems I could shave off a GB or so by deleting `llvm-3.3` and 
> `openblas-`. `deps/srccache` is another 650 MB, can that also be 
> deleted?
>
> My main reason for building from source rather than using the binaries is 
> that now and then I stumble on something I want to investigate and/or 
> improve in base, and the threshold for actually filing a PR is much lower 
> if I already have the code I'm running locally. Once 0.5 is out for real 
> I'll probably drop the source tree for 0.4, so "temporarily" freeing up a 
> couple of gigs by deleting build intermediates is good enough for now.
>
> // T
>
> On Wednesday, August 10, 2016 at 10:49:38 AM UTC+2, Andreas Lobinger wrote:
>>
>> Hello colleague,
>>
>> On Wednesday, August 10, 2016 at 10:11:46 AM UTC+2, Tomas Lycken wrote:
>>>
>>> Both instances of Julia are runnable, so I don’t think I deleted 
>>> something I shouldn’t have in either folder. 
>>>
>>> What has changed to make Julia 0.5 so big? Are there any build artifacts 
>>> I can/should prune to reduce this footprint?
>>>
>> my guess it's some cumulative build artefacts:
>>
>> lobi@orange4:~/julia05/deps$ du -sh *
>> 8,0K    arpack.mk
>> 12K     blas.mk
>> 4,8G    build
>> 508K    checksums
>> 4,0K    dsfmt.mk
>> 8,0K    fftw.mk
>> 4,0K    gfortblas.alias
>> 8,0K    gfortblas.c
>> 4,0K    gmp.mk
>> 4,0K    libdSFMT.def
>> 4,0K    libgit2.mk
>> 4,0K    libgit2.version
>> 4,0K    libssh2.mk
>> 4,0K    libssh2.version
>> 4,0K    libuv.mk
>> 4,0K    libuv.version
>> 20K     llvm.mk
>> 4,0K    llvm-ver.make
>> 8,0K    Makefile
>> 4,0K    mbedtls.mk
>> 4,0K    mpfr.mk
>> 4,0K    NATIVE.cmake
>> 4,0K    objconv.mk
>> 4,0K    openblas.version
>> 4,0K    openlibm.mk
>> 4,0K    openlibm.version
>> 4,0K    openspecfun.mk
>> 4,0K    openspecfun.version
>> 4,0K    patchelf.mk
>> 328K    patches
>> 4,0K    pcre.mk
>> 2,0G    srccache
>> 8,0K    suitesparse.mk
>> 4,0K    SuiteSparse_wrapper.c
>> 20K     tools
>> 4,0K    unwind.mk
>> 4,0K    utf8proc.mk
>> 4,0K    utf8proc.version
>> 384K    valgrind
>> 4,0K    Versions.make
>> 4,0K    virtualenv.mk
>>
>>

Re: [julia-users] Plots with scale bars

2016-08-10 Thread Tom Breloff
Sure. You could maybe make a recipe, or just make a function that does
this. You'll probably want to make use of 'axis_limits' to get the data
range. Like this:

amin, amax = axis_limits(plt[1][:yaxis])

Here 'plt[1]' returns the first Subplot.

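
Islam's wish below for a single-call interface could be prototyped on top
of this. `scalebars!` is a hypothetical helper (not part of Plots), using
`axis_limits` as above to place the bars relative to the data range:

```julia
using Plots

# Hypothetical helper: hide the axes and draw L-shaped scale bars
# sized in data units, anchored near the lower-left of the data range.
function scalebars!(plt, xlen, ylen, xlab, ylab)
    xmin, xmax = axis_limits(plt[1][:xaxis])
    ymin, ymax = axis_limits(plt[1][:yaxis])
    x = xmin + 0.05 * (xmax - xmin)
    y = ymin + 0.05 * (ymax - ymin)
    plot!(plt, [x, x, x + xlen], [y + ylen, y, y],
          c=:black, leg=false, axis=nothing,
          ann=[(x + xlen/2, y, text(xlab, :top, 10)),
               (x, y + ylen/2, text(ylab, :right, 10))])
end

plt = plot(linspace(10, 34, 100), 30rand(100, 2) .+ [0 60])
scalebars!(plt, 10, 10, "10 ms", "10 mA")
```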
On Wednesday, August 10, 2016, Islam Badreldin 
wrote:

> Hi Tom,
>
> Thanks for your input. This solution seems similar to the one proposed on
> matlabcentral. It's fine for now, but it'd be great if switching between a
> plot with x-y axes to a plot with only scale bars can be done neatly using
> a single switch or function, while hiding all the details from the end
> user. For example, figuring out the correct parameters for the text
> annotation is a function of current plot scale (min/max), the text size of
> the displayed text, ... etc. Ideally, the end user shouldn't need to fiddle
> with all these details, and the scale bars functionality should `just work'
> as the x-y axes functionality does!
>
> Thanks,
> Islam
>
> PS: the first plot command gives me `ERROR: Unknown key: axis`. But I
> assume that is because I'm currently on Julia v0.4.6 with Plots v0.8.2.
>
> On Wednesday, August 10, 2016 at 10:55:10 AM UTC-4, Tom Breloff wrote:
>
>> I threw this together really quickly... excuse the mess:
>>
>> using Plots
>>
>> # initial plot
>> plot(linspace(10,34,100), 30rand(100,2).+[0 60],
>>  leg=false, ylim=(-60,150), axis=nothing,
>>  xguide="Time(ms)", yguide="current(mA)")
>>
>> # lines and annotations
>> x,y = 12,-40
>> plot!([x,x,x+10], [y+10,y,y], c=:black,
>>ann=[(x+5,y,text("10 ms",:top,10)),
>> (x,y+5,text("10 mA",:right,10))])
>>
>> png("/tmp/tmp")
>>
>>
>>
>> ​
>>
>> On Wed, Aug 10, 2016 at 10:31 AM, Willem Hekman 
>> wrote:
>>
>>> If you'd try to do this using PyPlot you can remove the x- and y-axis by
>>> translating the necessary matplotlib code, i.e.
>>> http://www.shocksolution.com/2011/08/removing-an-axis-or-both-axes-from-a-matplotlib-plot/
>>> to the syntax that PyPlot takes, which is slightly different. You would
>>> get something like
>>>
>>> plot(x,y) # some 2D plot call
>>> ax = gca() # get current axes (the drawing area in PyPlot, dont confuse
>>> axes with axis here)
>>> ax[:get_xaxis]()[:set_visible](false) # make x-axis invisible
>>>
>>> for inspiration you can have a look at https://gist.github.com/gizmaa/7214002
>>>
>>> As for adding the scale bars, I can't seem to come up with a solution
>>> straight away, unfortunately. Anyone have an idea?
>>>
>>>
>>> On Monday, August 8, 2016 at 6:33:14 PM UTC+2, Islam Badreldin wrote:



 Hello,

 Is there a simple way in Julia to add scale bars with labels to plots
 and to hide the x-y axes as well? The way to do in MATLAB involves a lot of
 manual tweaking as described here
 http://www.mathworks.com/matlabcentral/answers/151248-add-a-
 scale-bar-to-my-plot

 I'm hoping to find a more elegant way in Julia!

 Thanks,
 Islam

>>>
>>


Re: [julia-users] Adding tuples

2016-08-10 Thread Shashi Gowda
Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], Val{N})

should be slightly faster and should not allocate unlike

Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)


On Wed, Aug 10, 2016 at 8:36 PM, 'Bill Hart' via julia-users <
julia-users@googlegroups.com> wrote:

> This code seems to be (about 50%) faster than recursive functions:
>
> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>
>
> But this seems (about 50%) slower:
>
>  ((a[i] + b[i] for i = 1:N)...)
>
>
> Anyway, I can use the first method, until I find something faster. It's
> definitely way more convenient. Thanks.
>
> Bill.
>
>
>
> On 10 August 2016 at 16:56, Erik Schnetter  wrote:
>
>> The built-in type `CartesianIndex` supports adding and subtracting, and
>> presumably also multiplication. It is implemented very efficiently, based
>> on tuples.
>>
>> Otherwise, to generate efficient code, you might have to make use of
>> "generated functions". These are similar to macros, but they know about the
>> types upon which they act, and thus know the value of `N`. This is a bit
>> low-level, so I'd use this only if (a) there is no other package
>> available, and (b) you have examined Julia's performance and found it
>> lacking.
>>
>> I would avoid overloading operators for `NTuple`, and instead use a new
>> immutable type, since overloading operations for Julia's tuples can have
>> unintended side effects.
>>
>> -erik
>>
>>
>> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
>> julia-users@googlegroups.com> wrote:
>>
>>> Does anyone know an efficient way to add NTuples in Julia?
>>>
>>> I can do it using recursive functions, but for various reasons this is
>>> not efficient in my context. I really miss something like tuple(a[i] + b[i]
>>> for i in 1:N) to create the resulting tuple all in one go (here a and b
>>> would be tuples).
>>>
>>> The compiler doesn't do badly with recursive functions for handling
>>> tuples in very straightforward situations, but for example, if I want to
>>> create an immutable type based on a tuple the compiler doesn't seem to be
>>> able to handle the necessary optimisations. At least, that is what I infer
>>> from the timings. Consider
>>>
>>> immutable bill{N}
>>>d::NTuple{N, Int}
>>> end
>>>
>>> and I want to add two bill's together. If I have to add the tuples
>>> themselves using recursive functions, then I no longer seem to be able to
>>> do something like:
>>>
>>> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose
>>> elements are of type bill.
>>>
>>> I know how to handle tuples via arrays, but for efficiency reasons I
>>> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).
>>>
>>> Bill.
>>>
>>
>>
>>
>> --
>> Erik Schnetter  http://www.perimeterinstitute.
>> ca/personal/eschnetter/
>>
>
>


[julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Jameson
It is tracking the dynamic scope of the code generator, it doesn't care 
about what code you emit. The generator function must not cause any 
side-effects and must be entirely computed from the types of the inputs and 
not other global state. Over time, these conditions are likely to be more 
accurately enforced, as needed to make various optimizations reliable 
and/or correct.
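
A concrete illustration of the rule: the generator may compute anything
from the argument *types*, but must not call `eval` or touch global state
while it runs. A hedged sketch (Julia 0.5 syntax):

```julia
# OK: the generated body is a pure function of the type parameter N.
@generated function sumtup{N}(t::NTuple{N})
    ex = :(t[1])
    for i = 2:N
        ex = :($ex + t[$i])    # build t[1] + t[2] + ... + t[N]
    end
    return ex
end

sumtup((1, 2, 3))   # 6

# NOT OK in 0.5: calling eval (directly or via a helper function)
# while the generator runs triggers
# "eval cannot be used in a generated function".
```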



On Wednesday, August 10, 2016 at 10:48:31 AM UTC-4, Erik Schnetter wrote:
>
> I'm encountering the error "eval cannot be used in a generated function" 
> in Julia 0.5 for code that is working in Julia 0.4. My question is -- what 
> exactly is now disallowed? For example, if a generated function `f` calls 
> another (non-generated) function `g`, can `g` then call `eval`? Does the 
> word "in" here refer to the code that is generated by the generated 
> function, or does it refer to the dynamical scope of the code generation 
> state of the generated function?
>
> To avoid the error I have to redesign my code, and I'd like to know ahead 
> of time what to avoid. A Google search only turned up the C file within 
> Julia that emits the respective error message, as well as the Travis build 
> log for my package.
>
> -erik
>
> -- 
> Erik Schnetter  
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] Re: Permissions in my 'usr' don't look right

2016-08-10 Thread Jameson
The build should be fine after  `chown -R`. I suspect you accidentally 
typed `make` in a root terminal you had open, since there's no other way 
linux would have let you chown them to root. Unless you prefer to suspect 
an alien virus is altering the permissions bits on your hard drive using 
their femtoray field generator :P


On Wednesday, August 10, 2016 at 5:37:55 AM UTC-4, Colin Beckingham wrote:
>
> If I change the dirs julia/usr and julia/usr-staging with a chmod 
> recursively to me as a user, then make completes and make testall is 
> successful. This is a complete shot in the dark, perhaps someone could let 
> me know if I should reinstall from scratch.
> I believe this happened in the last 24 hours, yesterday the make ran fine.
>
> On Wednesday, 10 August 2016 01:37:47 UTC-4, Colin Beckingham wrote:
>>
>> The latest 0.6 is failing to build on my openSUSE system. Specifically it 
>> keeps falling over when trying to handle the files and symlinks in usr/lib 
>> for libopenlibm.* that cannot be changed. Permissions on these files seem 
>> to be ok, but the permissions in the lib directory don't look right. I have 
>> a bunch of stuff owned by root for some reason, specifically
>> /julia> ls -l usr
>> total 24
>> drwxr-xr-x 2 colin users 4096 Aug 10 01:14 bin
>> drwxr-xr-x 3 colin users   19 Nov 25  2015 etc
>> drwxr-xr-x 8 root  root  4096 Aug  9 02:15 include
>> drwxr-xr-x 5 root  root  8192 Aug 10 01:22 lib
>> drwxr-xr-x 2 colin users6 Nov 25  2015 libexec
>> drwxr-xr-x 8 root  root80 Aug  9 02:15 share
>> drwxr-xr-x 2 root  root  4096 Aug  9 02:15 tools
>> I can't recall having switched to root to do any installs, but it is 
>> possible I did so and the memory faded.
>>
>

Re: [julia-users] Re: Plots with scale bars

2016-08-10 Thread Islam Badreldin
Hi Tom,

Thanks for your input. This solution seems similar to the one proposed on 
matlabcentral. It's fine for now, but it'd be great if switching between a 
plot with x-y axes to a plot with only scale bars can be done neatly using 
a single switch or function, while hiding all the details from the end 
user. For example, figuring out the correct parameters for the text 
annotation is a function of current plot scale (min/max), the text size of 
the displayed text, ... etc. Ideally, the end user shouldn't need to fiddle 
with all these details, and the scale bars functionality should `just work' 
as the x-y axes functionality does!

Thanks,
Islam

PS: the first plot command gives me `ERROR: Unknown key: axis`. But I 
assume that is because I'm currently on Julia v0.4.6 with Plots v0.8.2.

On Wednesday, August 10, 2016 at 10:55:10 AM UTC-4, Tom Breloff wrote:

> I threw this together really quickly... excuse the mess:
>
> using Plots
>
> # initial plot
> plot(linspace(10,34,100), 30rand(100,2).+[0 60],
>  leg=false, ylim=(-60,150), axis=nothing,
>  xguide="Time(ms)", yguide="current(mA)")
>
> # lines and annotations
> x,y = 12,-40
> plot!([x,x,x+10], [y+10,y,y], c=:black,
>ann=[(x+5,y,text("10 ms",:top,10)),
> (x,y+5,text("10 mA",:right,10))])
>
> png("/tmp/tmp")
>
>
>
> ​
>
> On Wed, Aug 10, 2016 at 10:31 AM, Willem Hekman  > wrote:
>
>> If you'd try to do this using PyPlot you can remove the x- and y-axis by 
>> translating the necessary matplotlib code, i.e. 
>> http://www.shocksolution.com/2011/08/removing-an-axis-or-both-axes-from-a-matplotlib-plot/
>> to the syntax that PyPlot takes, which is slightly different. You would 
>> get something like
>>
>> plot(x,y) # some 2D plot call
>> ax = gca() # get current axes (the drawing area in PyPlot, dont confuse 
>> axes with axis here)
>> ax[:get_xaxis]()[:set_visible](false) # make x-axis invisible
>>
>> for inspiration you can have a look at 
>> https://gist.github.com/gizmaa/7214002
>>
>> As for adding the scale bars, I can't seem to come up with a solution 
>> straight away, unfortunately. Anyone have an idea?
>>
>>
>> On Monday, August 8, 2016 at 6:33:14 PM UTC+2, Islam Badreldin wrote:
>>>
>>>
>>>
>>> Hello,
>>>
>>> Is there a simple way in Julia to add scale bars with labels to plots 
>>> and to hide the x-y axes as well? The way to do in MATLAB involves a lot of 
>>> manual tweaking as described here
>>>
>>> http://www.mathworks.com/matlabcentral/answers/151248-add-a-scale-bar-to-my-plot
>>>
>>> I'm hoping to find a more elegant way in Julia!
>>>
>>> Thanks,
>>> Islam
>>>
>>
>

[julia-users] Re: Issue with ccall

2016-08-10 Thread Jameson
The argument values don't get wrapped in a tuple themselves; only the type 
signature is a tuple. This should work:

 ccall(("__a1_MOD_teste","./a1.o"), Void, (Int64,), b)
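
One more wrinkle worth flagging for the Fortran case: Fortran passes
arguments by reference, so if `teste` is meant to write its result into
`b`, the signature needs a pointer type and a reference argument. A hedged
sketch (the mangled name `__a1_MOD_teste` is taken from the thread; the
shared-library name and the subroutine's declaration are assumptions):

```julia
# Assumes the library was built as a shared object, e.g.
#   gfortran a1.f90 -o liba1.so -fPIC -shared
# and that module a1 contains `subroutine teste(b)` with an
# integer(8) intent(out) argument.
b = Ref{Int64}(0)
ccall((:__a1_MOD_teste, "./liba1.so"), Void, (Ptr{Int64},), b)
println(b[])   # the value the subroutine stored in b
```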


On Wednesday, August 10, 2016 at 11:09:32 AM UTC-4, Abimael Jr wrote:
>
>
> Hello Julia's fans
>
> I am starting to study Julia. As I have some code in Fortran, I'm really 
> interested in the ccall function to call some specialized code in Fortran.
>
> I wrote a simple function inside a module to do some tests, passing a 
> parameter to receive a value from a Fortran subroutine.
>
> The Fortran code is in the attached file a1.f90. 
>
> I compile using gfortran : 
>
> gfortran a1.f90  -o a1.o  -fPIC  -shared 
>
> For the tests, I created a very simple Julia file, test.jl, attached. 
> The module a1.o is in the same directory as test.jl. 
>
> I invoke it using `julia test.jl` and get this error message:
>
>  julia test.jl 
> +-+ 
>  
> Teste 2: chamando subrotina teste recebendo o resultado no parâmetro b : 
> ERROR: LoadError: MethodError: `convert` has no method matching 
> convert(::Type{Int64}, ::Tuple{Int64})
> This may have arisen from a call to the constructor Int64(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   call{T}(::Type{T}, ::Any)
>   convert(::Type{Int64}, !Matched::Int8)
>   convert(::Type{Int64}, !Matched::UInt8)
>   ...
>  in anonymous at no file
>  in include at ./boot.jl:261
>  in include_from_node1 at ./loading.jl:320
>  in process_options at ./client.jl:280
>  in _start at ./client.jl:378
> while loading 
> /discolocal/abimael/simulacoes/testes/julia/workspace/fortran/test.jl, in 
> expression starting on line 6
>
>
> I am writing this post to ask for help because I made several changes, 
> trying to figure out what is wrong, but unfortunately I am not able to find a 
> solution.
> I changed the type of the parameter in the Fortran code, but it does 
> not work. 
>
> It seems that I am passing the parameter wrongly, but according to the 
> ccall documentation, I should pass the argument types as a tuple. 
>
> So, please, where am I going wrong?
> Thanks in advance! 
>
>
>
>
>
>
>

[julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread David Gold
I want to say it's the iteration over the large ZipIterator that's giving 
you grief. Possibly relevant (since the n-arg map implementation falls back on 
iterating over zipped argument arrays): 
https://github.com/JuliaLang/julia/issues/17321 

On Wednesday, August 10, 2016 at 5:34:08 AM UTC-4, Tamas Papp wrote:
>
> I was benchmarking code for calculating means of vectors, eg 
>
> v = [rand(100) for i in 1:100] 
>
> m1(v) = map(mean, zip(v...)) 
> m2(v) = mapslices(mean, hcat(v...), 2) 
>
> @elapsed m1(v) 
> @elapsed m2(v) 
>
> m2 is faster, but for 1000 vectors, 
>
> v = [rand(100) for i in 1:1000] 
>
> m1 with zip takes "forever" (CPU keeps running at 100%, I interrupted 
> after 10 minutes). Is this supposed to happen, or should I report it 
> as a bug? 
>
> Tamas 
>


[julia-users] Issue with ccall

2016-08-10 Thread Abimael Jr

Hello Julia's fans

I am starting to study Julia. As I have some code in Fortran, I'm really 
interested in the ccall function to call some specialized code in Fortran.

I wrote a simple function inside a module to do some tests, passing a 
parameter to receive a value from a Fortran subroutine.

The Fortran code is in the attached file a1.f90. 

I compile using gfortran : 

gfortran a1.f90  -o a1.o  -fPIC  -shared 

For the tests, I created a very simple Julia file, test.jl, attached. 
The module a1.o is in the same directory as test.jl. 

I invoke it using `julia test.jl` and get this error message:

 julia test.jl 
+-+ 
 
Teste 2: chamando subrotina teste recebendo o resultado no parâmetro b : 
ERROR: LoadError: MethodError: `convert` has no method matching 
convert(::Type{Int64}, ::Tuple{Int64})
This may have arisen from a call to the constructor Int64(...),
since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert(::Type{Int64}, !Matched::Int8)
  convert(::Type{Int64}, !Matched::UInt8)
  ...
 in anonymous at no file
 in include at ./boot.jl:261
 in include_from_node1 at ./loading.jl:320
 in process_options at ./client.jl:280
 in _start at ./client.jl:378
while loading 
/discolocal/abimael/simulacoes/testes/julia/workspace/fortran/test.jl, in 
expression starting on line 6


I am writing this post to ask for help because I made several changes, trying 
to figure out what is wrong, but unfortunately I was not able to find a 
solution.
I had changed the type of the parameter in the Fortran code, but it does 
not work. 

It seems that I am passing the parameter wrongly, but according to the 
documentation of ccall, I should pass the argument types as a tuple. 

So, please, where am I wrong or what am I doing wrong?
Thanks in advance 
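The MethodError above (no method matching convert(::Type{Int64}, ::Tuple{Int64})) is the signature of passing the argument values wrapped in a tuple: in ccall, the tuple is the list of argument *types*, and the values follow separately. A self-contained sketch of the distinction, using libc's abs (the Fortran call at the end is hypothetical, assuming a subroutine `teste` in a1.f90, and uses current `Cvoid` syntax; in Julia 0.4/0.5 the name was `Void`):

```julia
# The third ccall argument is a *tuple of argument types*;
# the actual argument values follow one by one, not as a tuple.
x = ccall(:abs, Cint, (Cint,), -7)   # correct: types tuple, then plain values
# Wrong -- produces the MethodError above:  ccall(:abs, Cint, (Cint,), (-7,))

# Hypothetical Fortran sketch: Fortran passes arguments by reference and
# gfortran appends an underscore to the symbol name, so the call would be:
#   b = Ref{Int64}(0)
#   ccall((:teste_, "./a1.o"), Cvoid, (Ref{Int64},), b)
#   b[]   # the value written by the Fortran subroutine
```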








a1.f90
Description: Binary data


test.jl
Description: Binary data


Re: [julia-users] Combining composite type & PyPlot methods with @manipulate (regarding a port/extension of QuTiP's Bloch Sphere)

2016-08-10 Thread Tom Breloff
Hi Willem.  If you make a Plots recipe for your type, you should be able to
use your pseudocode by replacing 'render' with 'plot'.

https://juliaplots.github.io/recipes/

On Wed, Aug 10, 2016 at 10:18 AM, Willem Hekman 
wrote:

> Hello all,
>
> I have been spending my time making http://qutip.org/docs/3.1.0/guide/guide-bloch.html
> work in Julia by translating the main parts of the
> source code http://qutip.org/docs/3.1.0/modules/qutip/bloch.html .
> 
>
> In short, what I've worked out is, in pseudo-code:
>
> type Bloch
>some properties.. :: of various types
>vectors ( :: Vector{Vector{Float64}}  ) # Vector of vectors that can
> be plotted on the sphere.
> end
>
> Bloch() = Bloch(standard properties, []) # initialize without any vectors
> to plot
>
> function add_vector(b::Bloch,vector::Vector{Float64})
>push!(b.vectors,vector)
> end
>
> function render(b::Bloch)
>   plot a sphere
>   plot equator
>   plot x,y and z axis
>   plot vectors
>   style axes
> end
>
> The actual code I've written so far can be found on
> https://github.com/whekman/edX/blob/master/Other/bloch.jl . Any advice on
> my code is much appreciated; I'm quite new to programming.
>
> Now *I'd love to make such rendering compatible with @manipulate*. I know
> that you can use withfig(fig) do  end but somehow I can't figure out how
> to incorporate it into this more object-style approach.
>
> Basically, I'm looking for a way to implement, in pseudo-code:
>
> b = Bloch()
> @manipulate for azimuth 0:15:90, latitude -180:15:180
>add_vector(b,azimuth,latitude)
>render(b)
> end
>
> So the goal is to have an easy way to draw points on such a sphere in an
> interactive way. Basically, I am having a hard time figuring out how to
> combine the use of such a composite type with @manipulate.
>
> Anyone know a solution?
>
> As an aside, the PyPlot code that I've written so far may be a nice,
> comprehensive example of 3D plotting using PyPlot . If so, how to make sure
> people can find it?
>
> Furthermore, any hopes of plotting Arrow3D objects from inside Julia? It
> is already possible in matplotlib http://stackoverflow.com/questions/29188612/arrows-in-matplotlib-using-mplot3d
>
> - Willem
>


Re: [julia-users] Adding tuples

2016-08-10 Thread 'Bill Hart' via julia-users
This code seems to be (about 50%) faster than recursive functions:

Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)


But this seems (about 50%) slower:

 ((a[i] + b[i] for i = 1:N)...)


Anyway, I can use the first method, until I find something faster. It's
definitely way more convenient. Thanks.

Bill.



On 10 August 2016 at 16:56, Erik Schnetter  wrote:

> The built-in type `CartesianIndex` supports adding and subtracting, and
> presumably also multiplication. It is implemented very efficiently, based
> on tuples.
>
> Otherwise, to generate efficient code, you might have to make use of
> "generated functions". These are similar to macros, but they know about the
> types upon which they act, and thus know the value of `N`. This is a bit
> low-level, so I'd use this only if (a) there is no other package
> available, and (b) you have examined Julia's performance and found it
> lacking.
>
> I would avoid overloading operators for `NTuple`, and instead use a new
> immutable type, since overloading operations for Julia's tuples can have
> unintended side effects.
>
> -erik
>
>
> On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
> julia-users@googlegroups.com> wrote:
>
>> Does anyone know an efficient way to add NTuples in Julia?
>>
>> I can do it using recursive functions, but for various reasons this is
>> not efficient in my context. I really miss something like tuple(a[i] + b[i]
>> for i in 1:N) to create the resulting tuple all in one go (here a and b
>> would be tuples).
>>
>> The compiler doesn't do badly with recursive functions for handling
>> tuples in very straightforward situations, but for example, if I want to
>> create an immutable type based on a tuple the compiler doesn't seem to be
>> able to handle the necessary optimisations. At least, that is what I infer
>> from the timings. Consider
>>
>> immutable bill{N}
>>d::NTuple{N, Int}
>> end
>>
>> and I want to add two bill's together. If I have to add the tuples
>> themselves using recursive functions, then I no longer seem to be able to
>> do something like:
>>
>> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose
>> elements are of type bill.
>>
>> I know how to handle tuples via arrays, but for efficiency reasons I
>> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).
>>
>> Bill.
>>
>
>
>
> --
> Erik Schnetter  http://www.perimeterinstitute.ca/personal/eschnetter/
>


Re: [julia-users] Adding tuples

2016-08-10 Thread Erik Schnetter
The built-in type `CartesianIndex` supports adding and subtracting, and
presumably also multiplication. It is implemented very efficiently, based
on tuples.

Otherwise, to generate efficient code, you might have to make use of
"generated functions". These are similar to macros, but they know about the
types upon which they act, and thus know the value of `N`. This is a bit
low-level, so I'd use this only if (a) there is no other package
available, and (b) you have examined Julia's performance and found it
lacking.

I would avoid overloading operators for `NTuple`, and instead use a new
immutable type, since overloading operations for Julia's tuples can have
unintended side effects.

-erik


On Wed, Aug 10, 2016 at 9:57 AM, 'Bill Hart' via julia-users <
julia-users@googlegroups.com> wrote:

> Does anyone know an efficient way to add NTuples in Julia?
>
> I can do it using recursive functions, but for various reasons this is not
> efficient in my context. I really miss something like tuple(a[i] + b[i] for
> i in 1:N) to create the resulting tuple all in one go (here a and b would
> be tuples).
>
> The compiler doesn't do badly with recursive functions for handling tuples
> in very straightforward situations, but for example, if I want to create an
> immutable type based on a tuple the compiler doesn't seem to be able to
> handle the necessary optimisations. At least, that is what I infer from the
> timings. Consider
>
> immutable bill{N}
>d::NTuple{N, Int}
> end
>
> and I want to add two bill's together. If I have to add the tuples
> themselves using recursive functions, then I no longer seem to be able to
> do something like:
>
> A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose elements
> are of type bill.
>
> I know how to handle tuples via arrays, but for efficiency reasons I
> certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).
>
> Bill.
>



-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/
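Erik's "generated functions" suggestion can be sketched as follows. This is illustrative only, written in current syntax (`where {N}`; 2016-era Julia spelled this `tupadd{N}(...)`), and `tupadd` is a made-up name; the generator body runs on the argument *types*, so it knows `N` and can emit a fully unrolled tuple expression without any runtime recursion or eval:

```julia
# The body of a @generated function returns an *expression*; Julia compiles
# that expression as the method body for the concrete argument types.
@generated function tupadd(a::NTuple{N,Int}, b::NTuple{N,Int}) where {N}
    ex = Expr(:tuple)                 # build (a[1]+b[1], a[2]+b[2], ...)
    for i in 1:N
        push!(ex.args, :(a[$i] + b[$i]))
    end
    return ex
end

tupadd((1, 2, 3), (10, 20, 30))  # (11, 22, 33)
```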


Re: [julia-users] Re: Adding tuples

2016-08-10 Thread 'Bill Hart' via julia-users
Oh, I didn't know about that either. This is also what I'm looking for.

When I looked through tuple.jl for inspiration and through the
documentation I just didn't see these (probably my fault for not looking
carefully enough).

Bill.

On 10 August 2016 at 16:53, Pablo Zubieta  wrote:

> And just to throw out another option, you might also consider
>
> ((a[i] + b[i] for i = 1:N)...)
>
> Pablo.
>


Re: [julia-users] Re: Plots with scale bars

2016-08-10 Thread Tom Breloff
I threw this together really quickly... excuse the mess:

using Plots

# initial plot
plot(linspace(10,34,100), 30rand(100,2).+[0 60],
 leg=false, ylim=(-60,150), axis=nothing,
 xguide="Time(ms)", yguide="current(mA)")

# lines and annotations
x,y = 12,-40
plot!([x,x,x+10], [y+10,y,y], c=:black,
   ann=[(x+5,y,text("10 ms",:top,10)),
(x,y+5,text("10 mA",:right,10))])

png("/tmp/tmp")



On Wed, Aug 10, 2016 at 10:31 AM, Willem Hekman 
wrote:

> If you'd try to do this using PyPlot you can remove the x- and y-axis by
> translating the necessary matplotlib code, i.e.
> http://www.shocksolution.com/2011/08/removing-an-axis-or-both-axes-from-a-matplotlib-plot/
> to the syntax that PyPlot takes, which is slightly different. You would
> get something like
>
> plot(x,y) # some 2D plot call
> ax = gca() # get current axes (the drawing area in PyPlot; don't confuse
> axes with axis here)
> ax[:get_xaxis]()[:set_visible](false) # make x-axis invisible
>
> for inspiration you can have a look at https://gist.github.com/gizmaa/7214002
>
> Adding the scale bars, I can't seem to come up with a solution straight away
> unfortunately. Anyone has an idea?
>
>
> On Monday, August 8, 2016 at 6:33:14 PM UTC+2, Islam Badreldin wrote:
>>
>>
>>
>> Hello,
>>
>> Is there a simple way in Julia to add scale bars with labels to plots and
>> to hide the x-y axes as well? The way to do in MATLAB involves a lot of
>> manual tweaking as described here
>> http://www.mathworks.com/matlabcentral/answers/151248-add-a-scale-bar-to-my-plot
>>
>> I'm hoping to find a more elegant way in Julia!
>>
>> Thanks,
>> Islam
>>
>


[julia-users] "eval cannot be used in a generated function"

2016-08-10 Thread Erik Schnetter
I'm encountering the error "eval cannot be used in a generated function" in
Julia 0.5 for code that is working in Julia 0.4. My question is -- what
exactly is now disallowed? For example, if a generated function `f` calls
another (non-generated) function `g`, can `g` then call `eval`? Does the
word "in" here refer to the code that is generated by the generated
function, or does it refer to the dynamical scope of the code generation
state of the generated function?

To avoid the error I have to redesign my code, and I'd like to know ahead
of time what to avoid. A Google search only turned up the C file within
Julia that emits the respective error message, as well as the Travis build
log for my package.

-erik

-- 
Erik Schnetter 
http://www.perimeterinstitute.ca/personal/eschnetter/


[julia-users] Re: Adding tuples

2016-08-10 Thread Pablo Zubieta
And just to throw out another option, you might also consider

((a[i] + b[i] for i = 1:N)...)

Pablo.


Re: [julia-users] Re: Adding tuples

2016-08-10 Thread 'Bill Hart' via julia-users
map is incredibly slow and not at all useful for something like addition.
However the first example looks like what I am looking for, depending on
how it is implemented.

Thanks.

Bill.

On 10 August 2016 at 16:45, Pablo Zubieta  wrote:

> Does something like this seem good enough?
>
> Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)
>
> there is also
>
> map(+, a, b)
>
> where `a`, and `b` are the tuples you want to sum elementwise.
>
> There is a chance that eventually one will be able to also use broadcast
> or elementwise addition .+ for doing this.
>


[julia-users] Re: Adding tuples

2016-08-10 Thread Pablo Zubieta
Does something like this seem good enough?

Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)

there is also

map(+, a, b)

where `a`, and `b` are the tuples you want to sum elementwise.

There is a chance that eventually one will be able to also use broadcast or 
elementwise addition .+ for doing this.
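For readers on later Julia versions: the broadcast option mentioned here did eventually land, and dotted operators work on tuples since Julia 1.0. A quick check (current syntax, not applicable to 0.5):

```julia
a = (1, 2, 3)
b = (10, 20, 30)

s1 = map(+, a, b)   # elementwise sum over tuples, returns a tuple
s2 = a .+ b         # broadcasting over tuples (Julia 1.x)

s1 == s2 == (11, 22, 33)
```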


Re: [julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Yichao Yu
On Wed, Aug 10, 2016 at 6:29 PM, Andreas Lobinger 
wrote:

> There is a memory allocation(?) bug in 0.5.0-rc1+0,
>

That bug is not related to memory allocation, and it is likely unrelated to
this issue either.


> recommendation is to go to rc1+1.
>
> On Wednesday, August 10, 2016 at 12:04:36 PM UTC+2, Tamas Papp wrote:
>>
>> Thanks. Forgot to say that I am using v"0.5.0-rc1+0".
>>
>>


[julia-users] Re: Plots with scale bars

2016-08-10 Thread Willem Hekman
If you'd try to do this using PyPlot you can remove the x- and y-axis by 
translating the necessary matplotlib code, i.e. 
http://www.shocksolution.com/2011/08/removing-an-axis-or-both-axes-from-a-matplotlib-plot/
to the syntax that PyPlot takes, which is slightly different. You would get 
something like

plot(x,y) # some 2D plot call
ax = gca() # get current axes (the drawing area in PyPlot; don't confuse 
axes with axis here)
ax[:get_xaxis]()[:set_visible](false) # make x-axis invisible

for inspiration you can have a look at 
https://gist.github.com/gizmaa/7214002

Adding the scale bars, I can't seem to come up with a solution straight away 
unfortunately. Anyone has an idea?

On Monday, August 8, 2016 at 6:33:14 PM UTC+2, Islam Badreldin wrote:
>
>
>
> Hello,
>
> Is there a simple way in Julia to add scale bars with labels to plots and 
> to hide the x-y axes as well? The way to do in MATLAB involves a lot of 
> manual tweaking as described here
>
> http://www.mathworks.com/matlabcentral/answers/151248-add-a-scale-bar-to-my-plot
>
> I'm hoping to find a more elegant way in Julia!
>
> Thanks,
> Islam
>


[julia-users] Combining composite type & PyPlot methods with @manipulate (regarding a port/extension of QuTiP's Bloch Sphere)

2016-08-10 Thread Willem Hekman
Hello all,

I have been spending my time making 
http://qutip.org/docs/3.1.0/guide/guide-bloch.html work in Julia by 
translating the main parts of the source code 
http://qutip.org/docs/3.1.0/modules/qutip/bloch.html .

In short, what I've worked out is, in pseudo-code:

type Bloch
   some properties.. :: of various types
   vectors ( :: Vector{Vector{Float64}}  ) # Vector of vectors that can be 
plotted on the sphere.
end

Bloch() = Bloch(standard properties, []) # initialize without any vectors 
to plot

function add_vector(b::Bloch,vector::Vector{Float64})
   push!(b.vectors,vector)
end

function render(b::Bloch)
  plot a sphere
  plot equator
  plot x,y and z axis
  plot vectors
  style axes
end

The actual code I've written so far can be found at 
https://github.com/whekman/edX/blob/master/Other/bloch.jl . Any advice on 
my code is much appreciated; I'm quite new to programming.

Now *I'd love to make such rendering compatible with @manipulate*. I know 
that you can use withfig(fig) do  end but somehow I can't figure out how 
to incorporate it into this more object-style approach.

Basically, I'm looking for a way to implement, in pseudo-code:

b = Bloch()
@manipulate for azimuth 0:15:90, latitude -180:15:180
   add_vector(b,azimuth,latitude)
   render(b)
end

So the goal is to have an easy way to draw points on such a sphere in an 
interactive way. Basically, I am having a hard time figuring out how to 
combine the use of such a composite type with @manipulate.

Anyone know a solution?

As an aside, the PyPlot code that I've written so far may be a nice, 
comprehensive example of 3D plotting using PyPlot . If so, how to make sure 
people can find it?

Furthermore, any hopes of plotting Arrow3D objects from inside Julia? It is 
already possible in matplotlib 
http://stackoverflow.com/questions/29188612/arrows-in-matplotlib-using-mplot3d

- Willem


[julia-users] Adding tuples

2016-08-10 Thread 'Bill Hart' via julia-users
Does anyone know an efficient way to add NTuples in Julia?

I can do it using recursive functions, but for various reasons this is not 
efficient in my context. I really miss something like tuple(a[i] + b[i] for 
i in 1:N) to create the resulting tuple all in one go (here a and b would 
be tuples).

The compiler doesn't do badly with recursive functions for handling tuples 
in very straightforward situations, but for example, if I want to create an 
immutable type based on a tuple the compiler doesn't seem to be able to 
handle the necessary optimisations. At least, that is what I infer from the 
timings. Consider

immutable bill{N}
   d::NTuple{N, Int}
end

and I want to add two bill's together. If I have to add the tuples 
themselves using recursive functions, then I no longer seem to be able to 
do something like:

A[i] = B[i] + C[i] efficiently, where A, B and C are arrays whose elements 
are of type bill.

I know how to handle tuples via arrays, but for efficiency reasons I 
certainly don't want to do that, e.g. tuple([a[i] + b[i] for i in 1:N]...).

Bill.
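One way to reconcile the ntuple approach suggested in the replies with the immutable wrapper from this post is to overload `+` on the wrapper type rather than on `NTuple` itself, which sidesteps the piracy concern. A hedged sketch in current syntax (`struct` replaced `immutable` in later Julia versions; `Bill` is the post's example type):

```julia
# Wrapper around a fixed-size integer tuple, as in the post.
struct Bill{N}
    d::NTuple{N, Int}
end

# ntuple builds the result tuple in one go; for small N the compiler
# unrolls the closure, so no recursion or intermediate array is needed.
Base.:+(a::Bill{N}, b::Bill{N}) where {N} = Bill(ntuple(i -> a.d[i] + b.d[i], N))

x = Bill((1, 2, 3))
y = Bill((10, 20, 30))
(x + y).d  # (11, 22, 33)
```

With this in place, `A[i] = B[i] + C[i]` over arrays of `Bill` works directly.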


Re: [julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Uwe Fechner
Here:
rc1+1 on the release-0.5 branch has the bug fixed, and may be more useful 
to test against:

https://s3.amazonaws.com/julianightlies/bin/linux/x64/0.5/julia-0.5.0-acfd04c18b-linux64.tar.gz

Uwe


On Wednesday, August 10, 2016 at 1:12:57 PM UTC+2, Tamas Papp wrote:
>
> Where can I find (64bit, Linux) binaries for rc1+1? 
>
> On Wed, Aug 10 2016, Andreas Lobinger wrote: 
>
> > There is a memory allocation(?) bug in 0.5.0-rc1+0, recommendation is to 
> go 
> > to rc1+1. 
> > 
> > On Wednesday, August 10, 2016 at 12:04:36 PM UTC+2, Tamas Papp wrote: 
> >> 
> >> Thanks. Forgot to say that I am using v"0.5.0-rc1+0". 
> >> 
> >> 
>
>

[julia-users] Re: Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Páll Haraldsson
On Wednesday, August 10, 2016 at 6:57:15 AM UTC, Kit Adams wrote:
>
> I am investigating the feasibility of embedding Julia in a C++ real-time 
> signal processing framework, using Julia-0.4.6 (BTW, the performance is 
> looking amazing).
>

There are other threads on Julia and real-time; I won't repeat them here. Are 
you OK with Julia for real-time work? Julia strictly isn't real-time; you 
just have to be careful.

I'm not looking into your embedding/GC issues, as I'm not too familiar with 
them. It seems to me that embedding doesn't fundamentally change the fact that 
the GC isn't real-time. And real-time isn't strictly about performance.


> However, for this usage I need to retain Julia state variables across c++ 
> function calls, so the stack based JL_GC_PUSH() and JL_GC_POP() are not 
> sufficient. 
> When I injected some jl_gc_collect() calls for testing purposes, to 
> simulate having multiple Julia scripts running (from the same thread), I 
> got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState); 
> and appropriate matching jl_gc_unpreserve() calls.
>
> I see these functions have been removed from the latest Julia version. 
>
> Is there an alternative that allows Julia values to be retained in a C++ 
> app across gc calls?
>

-- 
Palli.
 


Re: [julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Tomas Lycken
I don't think there are such binaries - you'd have to build from source - 
but my reproduction was on rc1+1, so that bug is (probably) not relevant to 
this issue.

// T

On Wednesday, August 10, 2016 at 1:12:57 PM UTC+2, Tamas Papp wrote:
>
> Where can I find (64bit, Linux) binaries for rc1+1? 
>
> On Wed, Aug 10 2016, Andreas Lobinger wrote: 
>
> > There is a memory allocation(?) bug in 0.5.0-rc1+0, recommendation is to 
> go 
> > to rc1+1. 
> > 
> > On Wednesday, August 10, 2016 at 12:04:36 PM UTC+2, Tamas Papp wrote: 
> >> 
> >> Thanks. Forgot to say that I am using v"0.5.0-rc1+0". 
> >> 
> >> 
>
>

Re: [julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Tamas Papp
Where can I find (64bit, Linux) binaries for rc1+1?

On Wed, Aug 10 2016, Andreas Lobinger wrote:

> There is a memory allocation(?) bug in 0.5.0-rc1+0, recommendation is to go 
> to rc1+1. 
>
> On Wednesday, August 10, 2016 at 12:04:36 PM UTC+2, Tamas Papp wrote:
>>
>> Thanks. Forgot to say that I am using v"0.5.0-rc1+0". 
>>
>>



Re: [julia-users] Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Bart Janssens
On Wed, Aug 10, 2016 at 11:46 AM Kit Adams  wrote:

> Thank you for those links, they are a great help.
>
> Is there an "unprotect_from_gc(T* val)"?
>
> I am looking for a smart pointer a bit like v8's UniquePersistent<>.
>
> I guess I could make one that searched through the array for the value in
> order to remove it (in the smart pointer's dtor).
>
>
No, I didn't need unprotect so far. Iterating over the array is one way;
another option would be to keep a std::map along with the array to find the
index immediately. I think removing elements from the array is cumbersome;
it's probably best to set the value to null (by assigning a
jl_box_voidpointer(nullptr) -- setting a jl_value_t* to null directly feels
wrong) and, depending on how often objects are added and removed, keep a
stack of the empty spots in the array so they can be reused.


Re: [julia-users] map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Mauro
Whilst this does not explain your observation, note that splatting large
vectors is bad for performance.

On Wed, 2016-08-10 at 11:33, Tamas Papp  wrote:
> I was benchmarking code for calculating means of vectors, eg
>
> v = [rand(100) for i in 1:100]
>
> m1(v) = map(mean, zip(v...))
> m2(v) = mapslices(mean, hcat(v...), 2)
>
> @elapsed m1(v)
> @elapsed m2(v)
>
> m2 is faster, but for 1000 vectors,
>
> v = [rand(100) for i in 1:1000]
>
> m1 with zip takes "forever" (CPU keeps running at 100%, I interrupted
> after a 10 minutes). Is this supposed to happen, or should I report it
> as a bug?
>
> Tamas


Re: [julia-users] Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Kit Adams
I guess I could try using std::remove() followed by jl_array_del_end() to 
remove entries.

Cheers,
Kit


On Wednesday, August 10, 2016 at 9:45:55 PM UTC+12, Kit Adams wrote:
>
> Thank you for those links, they are a great help.
>
> Is there an "unprotect_from_gc(T* val)"?
>
> I am looking for a smart pointer a bit like v8's UniquePersistent<>.
>
> I guess I could make one that searched through the array for the value in 
> order to remove it (in the smart pointer's dtor).
>
> Thanks,
> Kit
>
>
> On Wednesday, August 10, 2016 at 8:10:34 PM UTC+12, Bart Janssens wrote:
>>
>>
>>
>> On Wed, Aug 10, 2016 at 9:11 AM Yichao Yu  wrote:
>>
>>> On Wed, Aug 10, 2016 at 2:17 PM, Kit Adams  wrote:
>>>
 I am investigating the feasibility of embedding Julia in a C++ 
 real-time signal processing framework, using Julia-0.4.6 (BTW, the 
 performance is looking amazing).

 However, for this usage I need to retain Julia state variables across 
 c++ function calls, so the stack based JL_GC_PUSH() and JL_GC_POP() are 
 not 
 sufficient. 
 When I injected some jl_gc_collect() calls for testing purposes, to 
 simulate having multiple Julia scripts running (from the same thread), I 
 got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState); 
 and appropriate matching jl_gc_unpreserve() calls.

 I see these functions have been removed from the latest Julia version. 

 Is there an alternative that allows Julia values to be retained in a 
 C++ app across gc calls?

>>>
>>> Copy from my reply on github
>>>
>>> > This never works in the way you think it did. For keeping a value 
>>> live, put it in a rooted global array.
>>>
>>>
>>>
>> I'm not saying this is the best way to implement Yichao's suggestion, but 
>> here is how it's done in CxxWrap.jl:
>>
>> https://github.com/barche/CxxWrap.jl/blob/master/deps/src/cxx_wrap/type_conversion.hpp#L27-L38
>>
>> The array is allocated and rooted using jl_set_const here:
>>
>> https://github.com/barche/CxxWrap.jl/blob/master/deps/src/cxx_wrap/cxx_wrap.cpp#L14-L28
>>
>> Cheers,
>>
>> Bart
>>
>

Re: [julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Andreas Lobinger
There is a memory allocation(?) bug in 0.5.0-rc1+0, recommendation is to go 
to rc1+1. 

On Wednesday, August 10, 2016 at 12:04:36 PM UTC+2, Tamas Papp wrote:
>
> Thanks. Forgot to say that I am using v"0.5.0-rc1+0". 
>
>

Re: [julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Tamas Papp
Thanks. Forgot to say that I am using v"0.5.0-rc1+0".

@time tells me that m1 allocates a lot. I understand that using zip for
this purpose is suboptimal, I just want to understand why it does not
scale, because I find zip convenient for a lot of idioms.

On Wed, Aug 10 2016, Tomas Lycken wrote:

> For completeness: I can reproduce this behavior for m1. m2 returns fine 
> after a little over four times the time it took for 100 vectors.
>
> // T
>
> On Wednesday, August 10, 2016 at 11:34:08 AM UTC+2, Tamas Papp wrote:
>>
>> I was benchmarking code for calculating means of vectors, eg 
>>
>> v = [rand(100) for i in 1:100] 
>>
>> m1(v) = map(mean, zip(v...)) 
>> m2(v) = mapslices(mean, hcat(v...), 2) 
>>
>> @elapsed m1(v) 
>> @elapsed m2(v) 
>>
>> m2 is faster, but for 1000 vectors, 
>>
>> v = [rand(100) for i in 1:1000] 
>>
>> m1 with zip takes "forever" (CPU keeps running at 100%, I interrupted 
>> after a 10 minutes). Is this supposed to happen, or should I report it 
>> as a bug? 
>>
>> Tamas 
>>



[julia-users] Re: map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Tomas Lycken
For completeness: I can reproduce this behavior for m1. m2 returns fine 
after a little over four times the time it took for 100 vectors.

// T

On Wednesday, August 10, 2016 at 11:34:08 AM UTC+2, Tamas Papp wrote:
>
> I was benchmarking code for calculating means of vectors, eg 
>
> v = [rand(100) for i in 1:100] 
>
> m1(v) = map(mean, zip(v...)) 
> m2(v) = mapslices(mean, hcat(v...), 2) 
>
> @elapsed m1(v) 
> @elapsed m2(v) 
>
> m2 is faster, but for 1000 vectors, 
>
> v = [rand(100) for i in 1:1000] 
>
> m1 with zip takes "forever" (CPU keeps running at 100%, I interrupted 
> after a 10 minutes). Is this supposed to happen, or should I report it 
> as a bug? 
>
> Tamas 
>


Re: [julia-users] Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Kit Adams
Thank you for those links, they are a great help.

Is there an "unprotect_from_gc(T* val)"?

I am looking for a smart pointer a bit like v8's UniquePersistent<>.

I guess I could make one that searched through the array for the value in 
order to remove it (in the smart pointer's dtor).

Thanks,
Kit


On Wednesday, August 10, 2016 at 8:10:34 PM UTC+12, Bart Janssens wrote:
>
>
>
> On Wed, Aug 10, 2016 at 9:11 AM Yichao Yu  
> wrote:
>
>> On Wed, Aug 10, 2016 at 2:17 PM, Kit Adams wrote:
>>
>>> I am investigating the feasibility of embedding Julia in a C++ real-time 
>>> signal processing framework, using Julia-0.4.6 (BTW, the performance is 
>>> looking amazing).
>>>
>>> However, for this usage I need to retain Julia state variables across 
>>> c++ function calls, so the stack based JL_GC_PUSH() and JL_GC_POP() are not 
>>> sufficient. 
>>> When I injected some jl_gc_collect() calls for testing purposes, to 
>>> simulate having multiple Julia scripts running (from the same thread), I 
>>> got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState); 
>>> and appropriate matching jl_gc_unpreserve() calls.
>>>
>>> I see these functions have been removed from the latest Julia version. 
>>>
>>> Is there an alternative that allows Julia values to be retained in a C++ 
>>> app across gc calls?
>>>
>>
>> Copy from my reply on github
>>
>> > This never works in the way you think it did. For keeping a value live, 
>> put it in a rooted global array.
>>
>>
>>
> I'm not saying this is the best way to implement Yichao's suggestion, but 
> here is how it's done in CxxWrap.jl:
>
> https://github.com/barche/CxxWrap.jl/blob/master/deps/src/cxx_wrap/type_conversion.hpp#L27-L38
>
> The array is allocated and rooted using jl_set_const here:
>
> https://github.com/barche/CxxWrap.jl/blob/master/deps/src/cxx_wrap/cxx_wrap.cpp#L14-L28
>
> Cheers,
>
> Bart
>


[julia-users] Re: Permissions in my 'usr' don't look right

2016-08-10 Thread Colin Beckingham
If I recursively chmod the dirs julia/usr and julia/usr-staging to my own 
user, then make completes and make testall is successful. This is a complete 
shot in the dark; perhaps someone could let me know whether I should 
reinstall from scratch.
I believe this happened in the last 24 hours; yesterday the make ran fine.

On Wednesday, 10 August 2016 01:37:47 UTC-4, Colin Beckingham wrote:
>
> The latest 0.6 is failing to build on my openSUSE system. Specifically it 
> keeps falling over when trying to handle the files and symlinks in usr/lib 
> for libopenlibm.* that cannot be changed. Permissions on these files seem 
> to be ok, but the permissions in the lib directory don't look right. I have 
> a bunch of stuff owned by root for some reason, specifically
> /julia> ls -l usr
> total 24
> drwxr-xr-x 2 colin users 4096 Aug 10 01:14 bin
> drwxr-xr-x 3 colin users   19 Nov 25  2015 etc
> drwxr-xr-x 8 root  root  4096 Aug  9 02:15 include
> drwxr-xr-x 5 root  root  8192 Aug 10 01:22 lib
> drwxr-xr-x 2 colin users    6 Nov 25  2015 libexec
> drwxr-xr-x 8 root  root    80 Aug  9 02:15 share
> drwxr-xr-x 2 root  root  4096 Aug  9 02:15 tools
> I can't recall having switched to root to do any installs, but it is 
> possible I did so and the memory faded.
>


[julia-users] map(mean, zip(v...)) doesn't scale

2016-08-10 Thread Tamas Papp
I was benchmarking code for calculating means of vectors, eg

v = [rand(100) for i in 1:100]

m1(v) = map(mean, zip(v...))
m2(v) = mapslices(mean, hcat(v...), 2)

@elapsed m1(v)
@elapsed m2(v)

m2 is faster, but for 1000 vectors,

v = [rand(100) for i in 1:1000]

m1 with zip takes "forever" (CPU keeps running at 100%, I interrupted
after a 10 minutes). Is this supposed to happen, or should I report it
as a bug?

Tamas
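The poor scaling comes from `zip(v...)`: splatting 1000 vectors creates a 1000-argument zip iterator whose element type is a 1000-wide tuple, which is very hard on the compiler. A splat-free sketch of the same computation (`m3` is an illustrative name, continuing the post's `m1`/`m2` numbering):

```julia
# Elementwise mean across a vector of equal-length vectors, computed with
# a single accumulation loop -- no zip, no splatting.
function m3(v)
    s = zeros(length(v[1]))
    for x in v
        s .+= x          # in-place elementwise accumulation
    end
    return s ./ length(v)
end

v = [rand(100) for i in 1:1000]
length(m3(v))  # 100
```

This stays fast regardless of the number of vectors, since the cost is linear in `length(v)`.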


[julia-users] Re: Why is Julia 0.5 built from source almost twice as large (on disk) as Julia 0.4?

2016-08-10 Thread Tomas Lycken
Thanks for the replies.

Is it safe to assume that anything in deps/build that exists in multiple 
versions is only needed in the latest of those? For instance, I have 

```
159M deps/build/llvm-3.3
318M deps/build/llvm-3.7.1
881M deps/build/openblas-12ab1804b6ebcd38b26960d65d254314d8bc33d6
943M deps/build/openblas
```

where it seems I could shave off a GB or so by deleting `llvm-3.3` and 
`openblas-`. `deps/srccache` is another 650 MB, can that also be 
deleted?

My main reason for building from source rather than using the binaries is 
that now and then I stumble on something I want to investigate and/or 
improve in base, and the threshold for actually filing a PR is much lower 
if I already have the code I'm running locally. Once 0.5 is out for real 
I'll probably drop the source tree for 0.4, so "temporarily" freeing up a 
couple of gigs by deleting build intermediates is good enough for now.

// T

On Wednesday, August 10, 2016 at 10:49:38 AM UTC+2, Andreas Lobinger wrote:
>
> Hello colleague,
>
> On Wednesday, August 10, 2016 at 10:11:46 AM UTC+2, Tomas Lycken wrote:
>>
>> Both instances of Julia are runnable, so I don’t think I deleted 
>> something I shouldn’t have in either folder. 
>>
>> What has changed to make Julia 0.5 so big? Are there any build artifacts 
>> I can/should prune to reduce this footprint?
>>
> my guess is that it's some cumulative build artefacts:
>
>  lobi@orange4:~/julia05/deps$ du -sh *
> 8,0K  arpack.mk
> 12K   blas.mk
> 4,8G  build
> 508K  checksums
> 4,0K  dsfmt.mk
> 8,0K  fftw.mk
> 4,0K  gfortblas.alias
> 8,0K  gfortblas.c
> 4,0K  gmp.mk
> 4,0K  libdSFMT.def
> 4,0K  libgit2.mk
> 4,0K  libgit2.version
> 4,0K  libssh2.mk
> 4,0K  libssh2.version
> 4,0K  libuv.mk
> 4,0K  libuv.version
> 20K   llvm.mk
> 4,0K  llvm-ver.make
> 8,0K  Makefile
> 4,0K  mbedtls.mk
> 4,0K  mpfr.mk
> 4,0K  NATIVE.cmake
> 4,0K  objconv.mk
> 4,0K  openblas.version
> 4,0K  openlibm.mk
> 4,0K  openlibm.version
> 4,0K  openspecfun.mk
> 4,0K  openspecfun.version
> 4,0K  patchelf.mk
> 328K  patches
> 4,0K  pcre.mk
> 2,0G  srccache
> 8,0K  suitesparse.mk
> 4,0K  SuiteSparse_wrapper.c
> 20K   tools
> 4,0K  unwind.mk
> 4,0K  utf8proc.mk
> 4,0K  utf8proc.version
> 384K  valgrind
> 4,0K  Versions.make
> 4,0K  virtualenv.mk
>
>

[julia-users] Re: Why is Julia 0.5 built from source almost twice as large (on disk) as Julia 0.4?

2016-08-10 Thread Kristoffer Carlsson
The .git history for LLVM is also pretty big, ~500 MB.

I also see that I have three builds of OpenBLAS, so if you have multiple of 
them you can remove the unnecessary ones.

[julia-users] Why is Julia 0.5 built from source almost twice as large (on disk) as Julia 0.4?

2016-08-10 Thread Kristoffer Carlsson
The OpenBLAS tests take an extreme amount of space (600 MB), so getting rid of 
those is pretty good. They are in build/openblas-SHAID/{ctest,test}.

[julia-users] Re: Why is Julia 0.5 built from source almost twice as large (on disk) as Julia 0.4?

2016-08-10 Thread Andreas Lobinger
Hello colleague,

On Wednesday, August 10, 2016 at 10:11:46 AM UTC+2, Tomas Lycken wrote:
>
> Both instances of Julia are runnable, so I don’t think I deleted something 
> I shouldn’t have in either folder. 
>
> What has changed to make Julia 0.5 so big? Are there any build artifacts I 
> can/should prune to reduce this footprint?
>
My guess is it's some cumulative build artifacts:

 lobi@orange4:~/julia05/deps$ du -sh *
8,0K    arpack.mk
12K     blas.mk
4,8G    build
508K    checksums
4,0K    dsfmt.mk
8,0K    fftw.mk
4,0K    gfortblas.alias
8,0K    gfortblas.c
4,0K    gmp.mk
4,0K    libdSFMT.def
4,0K    libgit2.mk
4,0K    libgit2.version
4,0K    libssh2.mk
4,0K    libssh2.version
4,0K    libuv.mk
4,0K    libuv.version
20K     llvm.mk
4,0K    llvm-ver.make
8,0K    Makefile
4,0K    mbedtls.mk
4,0K    mpfr.mk
4,0K    NATIVE.cmake
4,0K    objconv.mk
4,0K    openblas.version
4,0K    openlibm.mk
4,0K    openlibm.version
4,0K    openspecfun.mk
4,0K    openspecfun.version
4,0K    patchelf.mk
328K    patches
4,0K    pcre.mk
2,0G    srccache
8,0K    suitesparse.mk
4,0K    SuiteSparse_wrapper.c
20K     tools
4,0K    unwind.mk
4,0K    utf8proc.mk
4,0K    utf8proc.version
384K    valgrind
4,0K    Versions.make
4,0K    virtualenv.mk



Re: [julia-users] Why is Julia 0.5 built from source almost twice as large (on disk) as Julia 0.4?

2016-08-10 Thread Tamas Papp
Have you tried git gc? Sometimes it saves a lot of space for me.

Also, unless you really need the sources in a git repo, you can try the
binary distributions, which are considerably smaller (around 200 MB).
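To make the `git gc` suggestion concrete, here is a hedged sketch run against a throwaway repository (it assumes `git` is on the PATH; on a real checkout you would just run `git gc` in place). The space savings in a large history like LLVM's come from repacking loose objects:

```julia
# Demonstrate `git gc` on a disposable repo: after gc, reachable objects
# are repacked into .git/objects/pack, which is where history compression
# happens.
repo = mktempdir()
run(`git -C $repo init -q`)
write(joinpath(repo, "file.txt"), "some content\n")
run(`git -C $repo add file.txt`)
run(`git -C $repo -c user.name=demo -c user.email=demo@example.com commit -qm "initial"`)
run(`git -C $repo gc --aggressive --prune=now --quiet`)

# the repacked objects now live under .git/objects/pack
readdir(joinpath(repo, ".git", "objects", "pack"))
```

`--prune=now` drops unreachable objects immediately instead of honoring the default two-week grace period, so use it only when you are sure you don't need the dangling history.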

On Wed, Aug 10 2016, Tomas Lycken wrote:

> After being away from Julia and the community for a couple of months, I 
> started updating my Julia installations today in order to test some of my 
> packages and make sure they’re ready for 0.5, and I started getting 
> warnings about disk usage. Since I’m dual-booting Ubuntu and Windows on a 
> laptop with only about 120GB SSD space, disk space is precious and every GB 
> counts.
>
> I have two parallel installations of Julia on my laptop, both built from 
> source - the release-0.4 branch and the release-0.5 branch (this folder 
> used to contain the master branch, but I checked out release-0.5 today). 
> After building with make (preceded by make cleanall in the 0.5 folder) and 
> pruning everything indicated by git status to be added (mostly tarballs and 
> a couple of repos inside deps/) I now have the following status:
>
> /opt/julia-0.4  release-0.4 $ git status
> On branch release-0.4
> Your branch is up-to-date with 'origin/release-0.4'.
> nothing to commit, working directory clean
>
> /opt/julia-0.5  release-0.5 $ git status
> On branch release-0.5
> Your branch is up-to-date with 'origin/release-0.5'.
> nothing to commit, working directory clean
>
> /opt $ du -hd1 | sort -hr
> 6,8G    .
> 4,0G    ./julia-0.5
> 2,3G    ./julia-0.4
> # smaller stuff omitted
>
> Both instances of Julia are runnable, so I don’t think I deleted something 
> I shouldn’t have in either folder.
>
> What has changed to make Julia 0.5 so big? Are there any build artifacts I 
> can/should prune to reduce this footprint?
>
> // T
> ​



[julia-users] Why is Julia 0.5 built from source almost twice as large (on disk) as Julia 0.4?

2016-08-10 Thread Tomas Lycken


After being away from Julia and the community for a couple of months, I 
started updating my Julia installations today in order to test some of my 
packages and make sure they’re ready for 0.5, and I started getting 
warnings about disk usage. Since I’m dual-booting Ubuntu and Windows on a 
laptop with only about 120GB SSD space, disk space is precious and every GB 
counts.

I have two parallel installations of Julia on my laptop, both built from 
source - the release-0.4 branch and the release-0.5 branch (this folder 
used to contain the master branch, but I checked out release-0.5 today). 
After building with make (preceded by make cleanall in the 0.5 folder) and 
pruning everything indicated by git status to be added (mostly tarballs and 
a couple of repos inside deps/) I now have the following status:

/opt/julia-0.4  release-0.4 $ git status
On branch release-0.4
Your branch is up-to-date with 'origin/release-0.4'.
nothing to commit, working directory clean

/opt/julia-0.5  release-0.5 $ git status
On branch release-0.5
Your branch is up-to-date with 'origin/release-0.5'.
nothing to commit, working directory clean

/opt $ du -hd1 | sort -hr
6,8G    .
4,0G    ./julia-0.5
2,3G    ./julia-0.4
# smaller stuff omitted

Both instances of Julia are runnable, so I don’t think I deleted something 
I shouldn’t have in either folder.

What has changed to make Julia 0.5 so big? Are there any build artifacts I 
can/should prune to reduce this footprint?

// T
​


Re: [julia-users] Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Bart Janssens
On Wed, Aug 10, 2016 at 9:11 AM Yichao Yu  wrote:

> On Wed, Aug 10, 2016 at 2:17 PM, Kit Adams  wrote:
>
>> I am investigating the feasibility of embedding Julia in a C++ real-time
>> signal processing framework, using Julia-0.4.6 (BTW, the performance is
>> looking amazing).
>>
>> However, for this usage I need to retain Julia state variables across c++
>> function calls, so the stack based JL_GC_PUSH() and JL_GC_POP() are not
>> sufficient.
>> When I injected some jl_gc_collect() calls for testing purposes, to
>> simulate having multiple Julia scripts running (from the same thread), I
>> got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState);
>> and appropriate matching jl_gc_unpreserve() calls.
>>
>> I see these functions have been removed from the latest Julia version.
>>
>> Is there an alternative that allows Julia values to be retained in a C++
>> app across gc calls?
>>
>
> Copy from my reply on github
>
> > This never works in the way you think it did. For keeping a value live,
> put it in a rooted global array.
>
>
>
I'm not saying this is the best way to implement Yichao's suggestion, but
here is how it's done in CxxWrap.jl:
https://github.com/barche/CxxWrap.jl/blob/master/deps/src/cxx_wrap/type_conversion.hpp#L27-L38

The array is allocated and rooted using jl_set_const here:
https://github.com/barche/CxxWrap.jl/blob/master/deps/src/cxx_wrap/cxx_wrap.cpp#L14-L28

Cheers,

Bart


Re: [julia-users] Methods inside type definition, use of "self" ?

2016-08-10 Thread Tamas Papp
Broadly yes. In Julia (and Common Lisp, Dylan, etc), methods do not
"belong" to a class intrinsically. If you want to read up about this
paradigm, search for "multiple dispatch" or "multimethods".

On Wed, Aug 10 2016, Willem Hekman wrote:

> Hello all,
>
> I must say that I'm quite new to object-oriented programming. 
>
> Do I understand correctly from the manual that in Julia (unlike python) you 
> do not use the keyword "self" and declare methods that apply to a type 
> outside the type definition?
>
> To illustrate, let's say we want to have a type of apple and want to push a 
> flavor to the array of flavors that characterizes an apple:
>
> # define a type: Apple
> type Apple
> brand::ASCIIString
> color::ASCIIString
> flavors::Array{ASCIIString,1}
> 
> 
> end
>
> # a method designed to add flavors to the apple
> function add_flavor(apple::Apple,flavor::ASCIIString)
>
> push!(apple.flavors,flavor)
> end
> # create an instance of an Apple
> Fuji = Apple("Fuji","red",["sweet"])
>
> # add a flavor
> add_flavor(Fuji, "sour")
>
> Is this the way you'd do it in Julia?
>
> In python I got used to putting methods that apply to "Apple" instances 
> inside the type definition where the keyword "self" would be used to add a 
> flavor: push!(self.flavors,flavor)
>
> What would you say?
>
> -Willem



Re: [julia-users] Methods inside type definition, use of "self" ?

2016-08-10 Thread Willem Hekman
Thank you for the quick reply. 

This example makes it much clearer: depending on the language, you have to 
adopt a different style of programming. 

That makes a lot of sense ;-)

On Wednesday, August 10, 2016 at 9:46:26 AM UTC+2, Mauro wrote:
>
> Yes, this is correct.  The difference to classic OO programming 
> languages is that in Julia a method is not "owned" by a type.  Instead 
> it can be owned by several types as dispatch is on all arguments.  In 
> python and other OO dispatch is only on the first (usually implicit) 
> argument.  For your example this does not really matter, but if you have 
> several "equal" types interacting, then it does: 
>
> type Rocket end 
> type Asteroid end 
> type Planet end 
>
> collide(a,b) = collide(b,a) # make it commute 
> collide(::Rocket, ::Union{Asteroid,Planet}) = println("rocket explodes") 
> collide(::Asteroid, ::Planet) = println("dinosaurs die") 
>
> collide(Planet(),Rocket()) # rocket explodes 
> collide(Asteroid(), Planet()) # dinosaurs die 
>
> This would be more awkward to program in python.  I think it's a better 
> mental model to not view Julia as OO. 
>
> On Wed, 2016-08-10 at 09:26, Willem Hekman  > wrote: 
> > Hello all, 
> > 
> > I must say that I'm quite new to object-oriented programming. 
> > 
> > Do I understand correctly from the manual that in Julia (unlike python) 
> you 
> > do not use the keyword "self" and declare methods that apply to a type 
> > outside the type definition? 
> > 
> > To illustrate, let's say we want to have a type of apple and want to 
> push a 
> > flavor to the array of flavors that characterizes an apple: 
> > 
> > # define a type: Apple 
> > type Apple 
> > brand::ASCIIString 
> > color::ASCIIString 
> > flavors::Array{ASCIIString,1} 
> > 
> > 
> > end 
> > 
> > # a method designed to add flavors to the apple 
> > function add_flavor(apple::Apple,flavor::ASCIIString) 
> > 
> > push!(apple.flavors,flavor) 
> > end 
> > # create an instance of an Apple 
> > Fuji = Apple("Fuji","red",["sweet"]) 
> > 
> > # add a flavor 
> > add_flavor(Fuji, "sour") 
> > 
> > Is this the way you'd do it in Julia? 
> > 
> > In python I got used to putting methods that apply to "Apple" instances 
> > inside the type definition where the keyword "self" would be used to add 
> a 
> > flavor: push!(self.flavors,flavor) 
> > 
> > What would you say? 
> > 
> > -Willem 
>


Re: [julia-users] Methods inside type definition, use of "self" ?

2016-08-10 Thread Mauro
Yes, this is correct.  The difference from classic OO programming
languages is that in Julia a method is not "owned" by a type.  Instead
it can be owned by several types, as dispatch is on all arguments.  In
Python and other OO languages, dispatch is only on the first (usually
implicit) argument.  For your example this does not really matter, but
if you have several "equal" types interacting, then it does:

type Rocket end
type Asteroid end
type Planet end

collide(a,b) = collide(b,a) # make it commute
collide(::Rocket, ::Union{Asteroid,Planet}) = println("rocket explodes")
collide(::Asteroid, ::Planet) = println("dinosaurs die")

collide(Planet(),Rocket()) # rocket explodes
collide(Asteroid(), Planet()) # dinosaurs die

This would be more awkward to program in Python.  I think it's a better
mental model to not view Julia as OO.

On Wed, 2016-08-10 at 09:26, Willem Hekman  wrote:
> Hello all,
>
> I must say that I'm quite new to object-oriented programming.
>
> Do I understand correctly from the manual that in Julia (unlike python) you
> do not use the keyword "self" and declare methods that apply to a type
> outside the type definition?
>
> To illustrate, let's say we want to have a type of apple and want to push a
> flavor to the array of flavors that characterizes an apple:
>
> # define a type: Apple
> type Apple
> brand::ASCIIString
> color::ASCIIString
> flavors::Array{ASCIIString,1}
>
>
> end
>
> # a method designed to add flavors to the apple
> function add_flavor(apple::Apple,flavor::ASCIIString)
>
> push!(apple.flavors,flavor)
> end
> # create an instance of an Apple
> Fuji = Apple("Fuji","red",["sweet"])
>
> # add a flavor
> add_flavor(Fuji, "sour")
>
> Is this the way you'd do it in Julia?
>
> In python I got used to putting methods that apply to "Apple" instances
> inside the type definition where the keyword "self" would be used to add a
> flavor: push!(self.flavors,flavor)
>
> What would you say?
>
> -Willem


[julia-users] Methods inside type definition, use of "self" ?

2016-08-10 Thread Willem Hekman
Hello all,

I must say that I'm quite new to object-oriented programming. 

Do I understand correctly from the manual that in Julia (unlike Python) you 
do not use the keyword "self", and that you declare methods that apply to a 
type outside the type definition?

To illustrate, let's say we want to have a type of apple and want to push a 
flavor to the array of flavors that characterizes an apple:

# define a type: Apple
type Apple
    brand::ASCIIString
    color::ASCIIString
    flavors::Array{ASCIIString,1}
end

# a method designed to add flavors to the apple
function add_flavor(apple::Apple, flavor::ASCIIString)
    push!(apple.flavors, flavor)
end

# create an instance of an Apple
Fuji = Apple("Fuji", "red", ["sweet"])

# add a flavor
add_flavor(Fuji, "sour")

Is this the way you'd do it in Julia?

In Python I got used to putting methods that apply to "Apple" instances 
inside the type definition, where the keyword "self" would be used to add a 
flavor: push!(self.flavors,flavor)

What would you say?

-Willem


Re: [julia-users] Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Yichao Yu
On Wed, Aug 10, 2016 at 2:17 PM, Kit Adams  wrote:

> I am investigating the feasibility of embedding Julia in a C++ real-time
> signal processing framework, using Julia-0.4.6 (BTW, the performance is
> looking amazing).
>
> However, for this usage I need to retain Julia state variables across c++
> function calls, so the stack based JL_GC_PUSH() and JL_GC_POP() are not
> sufficient.
> When I injected some jl_gc_collect() calls for testing purposes, to
> simulate having multiple Julia scripts running (from the same thread), I
> got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState);
> and appropriate matching jl_gc_unpreserve() calls.
>
> I see these functions have been removed from the latest Julia version.
>
> Is there an alternative that allows Julia values to be retained in a C++
> app across gc calls?
>

Copied from my reply on GitHub:

> This never works in the way you think it did. For keeping a value live,
put it in a rooted global array.
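To make the "rooted global array" idea concrete, here is a hedged sketch of the pattern from the Julia side (the module and function names below are made up for illustration; this is not an official API, and CxxWrap.jl does the equivalent from C++). Anything reachable from a `const` global binding is visible to the GC and therefore survives collection until the reference is dropped:

```julia
# Hypothetical module illustrating the "rooted global array" scheme:
# a const global Any[] is always reachable by the GC, so values pushed
# into it stay alive across collections. `protect` returns an index
# that acts as a handle; `release` drops the reference again.
module GCRoots

const roots = Any[]

function protect(x)
    push!(roots, x)
    return length(roots)          # handle for later release
end

release(h) = (roots[h] = nothing; nothing)

end

v = rand(3)
h = GCRoots.protect(v)            # v is now rooted, safe across collections
GCRoots.release(h)                # drop the root when done with v
```

A real implementation would also recycle freed slots; the point is only that a `const` global container keeps its contents alive, which is the embedding-safe replacement for the removed jl_gc_preserve()/jl_gc_unpreserve() pair.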


[julia-users] Replacement for jl_gc_preserve() and jl_gc_unpreserve()?

2016-08-10 Thread Kit Adams
I am investigating the feasibility of embedding Julia in a C++ real-time 
signal processing framework, using Julia-0.4.6 (BTW, the performance is 
looking amazing).

However, for this usage I need to retain Julia state variables across C++ 
function calls, so the stack-based JL_GC_PUSH() and JL_GC_POP() macros are 
not sufficient. 
When I injected some jl_gc_collect() calls for testing purposes, to 
simulate having multiple Julia scripts running (from the same thread), I 
got crashes, which I was able to fix using e.g. jl_gc_preserve(mMyState) 
and appropriately matched jl_gc_unpreserve() calls.

I see these functions have been removed from the latest Julia version. 

Is there an alternative that allows Julia values to be retained in a C++ 
app across GC calls?