[julia-users] Re: Why construction of Dict via generator is not type-stable?

2016-11-15 Thread Pablo Zubieta
FWIW this should work fine on master and the solution is likely to be 
backported to the next 0.5.x version 
(https://github.com/JuliaLang/julia/pull/19245).

On Sunday, November 13, 2016 at 10:16:20 AM UTC-6, bogumil@gmail.com 
wrote:
>
> Consider the following two functions:
>
> function f1(t, a)
> [t(x) for x in a]
> end
>
> function f2(t, a)
> Dict((x, t(x)) for x in a)
> end
>
> When I run @code_warntype on them I get:
>
> julia> @code_warntype f1(sin, [1.0, 2.0])
> Variables:
>   #self#::#f1
>   t::Base.#sin
>   a::Array{Float64,1}
>
> Body:
>   begin
>   return $(Expr(:invoke, LambdaInfo for 
> collect(::Base.Generator{Array{Float64,1},Base.#sin}), :(Base.collect), 
> :($(Expr(:new, Base.Generator{Array{Float64,1},Base.#sin}, :(t), :(a))
>   end::Array{Float64,1}
>
> julia> @code_warntype f2(sin, [1.0, 2.0])
> Variables:
>   #self#::#f2
>   t::Base.#sin
>   a::Array{Float64,1}
>   #1::##1#2{Base.#sin}
>
> Body:
>   begin
>   #1::##1#2{Base.#sin} = $(Expr(:new, ##1#2{Base.#sin}, :(t)))
>   SSAValue(0) = #1::##1#2{Base.#sin}
>   SSAValue(1) = $(Expr(:new, 
> Base.Generator{Array{Float64,1},##1#2{Base.#sin}}, SSAValue(0), :(a)))
>   return $(Expr(:invoke, LambdaInfo for 
> Dict{K,V}(::Base.Generator{Array{Float64,1},##1#2{Base.#sin}}), 
> :(Main.Dict), SSAValue(1)))
>   end::Union{Dict{Any,Any},Dict{Float64,Float64},Dict{Union{},Union{}}}
>
> I have the following questions:
>
>1. Why, when f2 is used as written above, is the return type not 
>Dict{Float64,Float64} (as in the case of f1, where it is 
>Array{Float64,1})?
>2. How to fix f2 so that Julia can infer the return type?
>3. Is there a way to change f2 so that first an empty Dict variable is 
>created, with its key and value types determined based on the passed 
>arguments, and only then this dictionary is populated with data?
>
> With kind regards,
> Bogumil Kaminski
>
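A sketch answering questions 2 and 3 (my own illustration, not from the 
thread; it assumes `a` is non-empty and `t` is type-stable, so the value 
type can be read off the first element):

```julia
# Hypothetical workaround: determine key/value types up front, create an
# empty typed Dict, then populate it.
function f3(t, a)
    K = eltype(a)
    V = typeof(t(first(a)))  # assumes a is non-empty and t is type-stable
    d = Dict{K,V}()
    for x in a
        d[x] = t(x)
    end
    return d
end

d = f3(sin, [1.0, 2.0])  # Dict{Float64,Float64} with two entries
```

Because the Dict's type parameters are fixed before insertion, inference no 
longer has to reason about the generator's element type.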


Re: [julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
Sheehan, empty arrays are a problem in general; for now I'd say it's better 
to handle the empty case separately in your functions. Note that the only 
problematic case should be broadcasting with non-type-stable functions over 
empty arrays. Unless that is extremely important for you, you might 
consider overriding promote_op, but again I'd say it is better to handle 
the empty case on its own.

On Saturday, September 24, 2016 at 12:18:10 AM UTC+2, Sheehan Olver wrote:
>
> An empty array triggered a bug caused by not dispatching correctly, which 
> I worked around with isempty.  But I could have also overriden promote_op 
> and not had to deal with empty arrays as a special case.
>
>
> On 24 Sep. 2016, at 8:15 am, Pablo Zubieta <pabl...@gmail.com 
> > wrote:
>
> Sheehan, are you planning on doing a lot of operations with empty arrays? 
> On 0.5 with your example
>
> julia> [Foo(1), Foo(2)] + [Foo(1), Foo(0)]
> 2-element Array{Foo,1}:
>  Foo{Float64}(1.0)
>  Foo{Int64}(1)
>
> The problem is empty arrays: when the type cannot be inferred, broadcast 
> uses the types of each element to build the array. When there are no 
> elements it doesn't know what type to choose.
>
> On Friday, September 23, 2016 at 11:47:11 PM UTC+2, Sheehan Olver wrote:
>>
>> OK, here's a better example of the issue: in the following code I would 
>> want it to return an *Array(Foo,0)*, not an *Array(Any,0).  *Is this 
>> possible without overriding promote_op?
>>
>> *julia> **immutable Foo{T}*
>>
>>*x::T*
>>
>>*end*
>>
>>
>> *julia> **import Base.+*
>>
>>
>> *julia> **+(a::Foo,b::Foo) = (a.x==b.x? Foo(1.0) : Foo(1))*
>>
>> *+ (generic function with 164 methods)*
>>
>>
>> *julia> **Array(Foo{Float64},0)+Array(Foo{Float64},0)*
>>
>> *0-element Array{Any,1}*
>>
>>
>>
>>
>>
>> On Friday, September 23, 2016 at 10:54:03 PM UTC+10, Pablo Zubieta wrote:
>>>
>>> In julia 0.5 the following should work without needing to do anything to 
>>> promote_op:
>>>
>>> import Base.+
>>> immutable Foo end
>>> +(a::Foo, b::Foo) = 1.0
>>> Array{Foo}(0) + Array{Foo}(0)
>>>
>>> promote_op is supposed to be an internal method that you wouldn't need 
>>> to override. If it is not working, it is because the operation you are 
>>> doing is most likely not type stable. So instead of specializing it you 
>>> could try to remove any type instabilities in the method definitions 
>>> over your types.
>>>
>>> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>>>
>>>>
>>>> The subject says it all: it looks like one can override promote_op to 
>>>> support the following behaviour:
>>>>
>>>> *julia> **import Base.+*
>>>>
>>>>
>>>> *julia> **immutable Foo end*
>>>>
>>>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>>>> REPL[5]:1 overwritten at REPL[10]:1.
>>>>
>>>>
>>>> *julia> **+(a::Foo,b::Foo) = 1.0*
>>>>
>>>> *+ (generic function with 164 methods)*
>>>>
>>>>
>>>> *julia> **Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = 
>>>> Float64*
>>>>
>>>>
>>>> *julia> **Array(Foo,0) + Array(Foo,0)*
>>>>
>>>> *0-element Array{Float64,1}*
>>>>
>>>>
>>>> Is this documented somewhere?  What if we want to override /, -, etc., 
>>>> is the solution to write a promote_op for each case?
>>>>
>>>
>

[julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
Andrew, I do not understand the details, but I believe there are some 
restrictions when using generated functions. You are not supposed to use 
functions with side effects, closures, or comprehensions, and functions 
like promote_op that rely on inference can confuse inference when used 
inside generated functions.
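For illustration, a generated function that stays within those restrictions: 
its body computes only from the argument types, with no closures, 
comprehensions, or side effects (a sketch of mine, not code from the thread):

```julia
# Inside the body, `t` is bound to the argument's *type*, so reading
# t.parameters is a purely type-level computation that inference handles.
@generated function tuple_len(t::Tuple)
    N = length(t.parameters)
    return :($N)
end

tuple_len((1, 2.0, "three"))  # 3
```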

On Friday, September 23, 2016 at 10:16:17 PM UTC+2, Andrew Keller wrote:
>
> Does the promote_op mechanism in v0.5 play nicely with generated 
> functions? In Unitful.jl, I use a generated function to determine result 
> units after computations involving quantities with units. I seem to get 
> errors (@inferred tests fail) if I remove my promote_op specialization. 
> Perhaps my problems are all a consequence of 
> https://github.com/JuliaLang/julia/issues/18465 and they will go away 
> soon...?
>
> On Friday, September 23, 2016 at 5:54:03 AM UTC-7, Pablo Zubieta wrote:
>>
>> In julia 0.5 the following should work without needing to do anything to 
>> promote_op:
>>
>> import Base.+
>> immutable Foo end
>> +(a::Foo, b::Foo) = 1.0
>> Array{Foo}(0) + Array{Foo}(0)
>>
>> promote_op is supposed to be an internal method that you wouldn't need 
>> to override. If it is not working, it is because the operation you are 
>> doing is most likely not type stable. So instead of specializing it you 
>> could try to remove any type instabilities in the method definitions 
>> over your types.
>>
>> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>>
>>>
>>> The subject says it all: it looks like one can override promote_op to 
>>> support the following behaviour:
>>>
>>> *julia> **import Base.+*
>>>
>>>
>>> *julia> **immutable Foo end*
>>>
>>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>>> REPL[5]:1 overwritten at REPL[10]:1.
>>>
>>>
>>> *julia> **+(a::Foo,b::Foo) = 1.0*
>>>
>>> *+ (generic function with 164 methods)*
>>>
>>>
>>> *julia> **Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = 
>>> Float64*
>>>
>>>
>>> *julia> **Array(Foo,0) + Array(Foo,0)*
>>>
>>> *0-element Array{Float64,1}*
>>>
>>>
>>> Is this documented somewhere?  What if we want to override /, -, etc., 
>>> is the solution to write a promote_op for each case?
>>>
>>

[julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
Sheehan, are you planning on doing a lot of operations with empty arrays? 
On 0.5 with your example

julia> [Foo(1), Foo(2)] + [Foo(1), Foo(0)]
2-element Array{Foo,1}:
 Foo{Float64}(1.0)
 Foo{Int64}(1)

The problem is empty arrays: when the type cannot be inferred, broadcast 
uses the types of each element to build the array. When there are no 
elements it doesn't know what type to choose.

On Friday, September 23, 2016 at 11:47:11 PM UTC+2, Sheehan Olver wrote:
>
> OK, here's a better example of the issue: in the following code I would 
> want it to return an *Array(Foo,0)*, not an *Array(Any,0).  *Is this 
> possible without overriding promote_op?
>
> *julia> **immutable Foo{T}*
>
>*x::T*
>
>*end*
>
>
> *julia> **import Base.+*
>
>
> *julia> **+(a::Foo,b::Foo) = (a.x==b.x? Foo(1.0) : Foo(1))*
>
> *+ (generic function with 164 methods)*
>
>
> *julia> **Array(Foo{Float64},0)+Array(Foo{Float64},0)*
>
> *0-element Array{Any,1}*
>
>
>
>
>
> On Friday, September 23, 2016 at 10:54:03 PM UTC+10, Pablo Zubieta wrote:
>>
>> In julia 0.5 the following should work without needing to do anything to 
>> promote_op:
>>
>> import Base.+
>> immutable Foo end
>> +(a::Foo, b::Foo) = 1.0
>> Array{Foo}(0) + Array{Foo}(0)
>>
>> promote_op is supposed to be an internal method that you wouldn't need 
>> to override. If it is not working, it is because the operation you are 
>> doing is most likely not type stable. So instead of specializing it you 
>> could try to remove any type instabilities in the method definitions 
>> over your types.
>>
>> On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>>>
>>>
>>> The subject says it all: it looks like one can override promote_op to 
>>> support the following behaviour:
>>>
>>> *julia> **import Base.+*
>>>
>>>
>>> *julia> **immutable Foo end*
>>>
>>> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
>>> REPL[5]:1 overwritten at REPL[10]:1.
>>>
>>>
>>> *julia> **+(a::Foo,b::Foo) = 1.0*
>>>
>>> *+ (generic function with 164 methods)*
>>>
>>>
>>> *julia> **Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = 
>>> Float64*
>>>
>>>
>>> *julia> **Array(Foo,0) + Array(Foo,0)*
>>>
>>> *0-element Array{Float64,1}*
>>>
>>>
>>> Is this documented somewhere?  What if we want to override /, -, etc., 
>>> is the solution to write a promote_op for each case?
>>>
>>

[julia-users] Re: How does promote_op work?

2016-09-23 Thread Pablo Zubieta
In julia 0.5 the following should work without needing to do anything to 
promote_op:

import Base.+
immutable Foo end
+(a::Foo, b::Foo) = 1.0
Array{Foo}(0) + Array{Foo}(0)

promote_op is supposed to be an internal method that you wouldn't need to 
override. If it is not working, it is because the operation you are doing is 
most likely not type stable. So instead of specializing it you could try to 
remove any type instabilities in the method definitions over your types.
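To illustrate the same idea in post-0.6 syntax (a sketch; `struct` replaced 
`immutable`): once `+` always returns one concrete type, inference gives 
even the empty result a concrete eltype, with no promote_op specialization.

```julia
struct Foo end

# Type-stable: always returns Float64, so inference can pick the eltype.
Base.:+(a::Foo, b::Foo) = 1.0

[Foo(), Foo()] + [Foo(), Foo()]  # [1.0, 1.0]
eltype(Foo[] + Foo[])            # Float64, even with no elements
```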

On Friday, September 23, 2016 at 5:35:05 AM UTC+2, Sheehan Olver wrote:
>
>
> The subject says it all: it looks like one can override promote_op to 
> support the following behaviour:
>
> *julia> **import Base.+*
>
>
> *julia> **immutable Foo end*
>
> WARNING: Method definition (::Type{Main.Foo})() in module Main at 
> REPL[5]:1 overwritten at REPL[10]:1.
>
>
> *julia> **+(a::Foo,b::Foo) = 1.0*
>
> *+ (generic function with 164 methods)*
>
>
> *julia> **Base.promote_op(::typeof(+),::Type{Foo},::Type{Foo}) = Float64*
>
>
> *julia> **Array(Foo,0) + Array(Foo,0)*
>
> *0-element Array{Float64,1}*
>
>
> Is this documented somewhere?  What if we want to override /, -, etc., is 
> the solution to write a promote_op for each case?
>


[julia-users] Re: Return type of broadcast

2016-09-17 Thread Pablo Zubieta
There is a bit of a conflict between elementwise operations and broadcast 
(both rely on promote_op). There have been issues in the past where people 
wanted things like

1 .+ Number[1, 2, 3] == Number[2, 3, 4]

to work. The current behaviour is consistent with this: if you start with a 
non-concrete container type, it tries to preserve the general container 
type. This works, except for Any; for Any it builds the array using the 
types of each value. The option for more consistency would be to have 
different promotion mechanisms for broadcast and for elementwise unary and 
binary operations.


[julia-users] Re: Any Julia pkgs using Polymer web components?

2016-09-07 Thread Pablo Zubieta
Escher.jl uses Polymer web components. I don't think it exposes them 
directly, but it shouldn't be too hard, looking at the code, to see how it 
incorporates them into its functionality.

On Wednesday, September 7, 2016 at 3:47:08 AM UTC+2, Jeffrey Sarnoff wrote:
>
> If you know of work with Julia and Polymer, please share that information.
>


[julia-users] Re: Adding tuples

2016-08-10 Thread Pablo Zubieta
And just to throw out another option, you might also consider

((a[i] + b[i] for i = 1:N)...)

Pablo.


[julia-users] Re: Adding tuples

2016-08-10 Thread Pablo Zubieta
Does something like this seem good enough?

Base.:+{N}(a::NTuple{N}, b::NTuple{N}) = ntuple(i -> a[i] + b[i], N)

there is also

map(+, a, b)

where `a`, and `b` are the tuples you want to sum elementwise.

There is a chance that eventually one will be able to also use broadcast or 
elementwise addition .+ for doing this.
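For comparison, the approaches from this thread side by side in current 
syntax (note the splatted generator needs a trailing comma in recent Julia, 
and tuple broadcasting with .+ did eventually land):

```julia
a, b = (1, 2, 3), (10, 20, 30)

t1 = ntuple(i -> a[i] + b[i], 3)       # explicit ntuple
t2 = map(+, a, b)                      # map over tuples
t3 = ((a[i] + b[i] for i in 1:3)...,)  # splatting a generator
t4 = a .+ b                            # broadcast over tuples (newer Julia)

# all four yield (11, 22, 33)
```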


Re: [julia-users] Changed broadcast semantics in 0.5?

2016-08-04 Thread Pablo Zubieta
The last example won't work even with this fixed. There is a PR for making 
that one work already, but that will have to wait a bit longer.

On Thursday, August 4, 2016 at 9:37:30 AM UTC+2, Oliver Schulz wrote:
>
> Thanks for the heads-up. I tried it out, and indeed
>
> broadcast(muladd, rand(5), rand(5), rand(5))
>
> and
>
> broadcast(muladd, rand(5), 1, rand(5))
>
> work now. Interestingly,
>
> broadcast(muladd, rand(5), rand(5), 1)
>
> Does not ("no method matching promote_eltype_op[...]"). Just curious, is 
> this to be expected?
>
> On Thursday, August 4, 2016 at 5:36:14 AM UTC+2, Kevin Squire wrote:
>>
>> For completeness, PR #17389 was merged 
>> <https://github.com/JuliaLang/julia/pull/17389>, and issue #17314 was 
>> closed <https://github.com/JuliaLang/julia/issues/17314>.
>>
>> Cheers,
>>Kevin
>>
>> On Mon, Aug 1, 2016 at 8:28 AM, Oliver Schulz <oliver...@tu-dortmund.de> 
>> wrote:
>>
>>> Thanks, Pablo. Uh, do you think that PR will make it into 0.5?
>>>
>>>
>>> On Monday, August 1, 2016 at 3:41:23 PM UTC+2, Pablo Zubieta wrote:
>>>>
>>>> This should work if https://github.com/JuliaLang/julia/pull/17389 gets 
>>>> merged.
>>>>
>>>> On Monday, August 1, 2016 at 3:06:36 PM UTC+2, Oliver Schulz wrote:
>>>>>
>>>>> > Not before the bug is fixed and this is also orthogonal to loop 
>>>>> fusion. 
>>>>>
>>>>> Sure, I get that. But that means then that bug is fixed, things like 
>>>>> broadcasting with (e.g.) muladd will be possible again? That would be 
>>>>> wonderful!
>>>>>
>>>>>
>>>>>
>>>>> On Monday, August 1, 2016 at 2:47:44 PM UTC+2, Yichao Yu wrote:
>>>>>>
>>>>>> On Mon, Aug 1, 2016 at 8:41 PM, Oliver Schulz 
>>>>>> <oliver...@tu-dortmund.de> wrote: 
>>>>>> > So cases like 
>>>>>> > 
>>>>>> > broadcast((x,y,z)->..., A, B, C) 
>>>>>> > 
>>>>>> > can't be supported any longer? Darn. :-( I love the things you guys 
>>>>>> are 
>>>>>> > doing in regard to fusing operations, but that was a very, very 
>>>>>> useful thing 
>>>>>> > to have. Is there any other way to do this now? 
>>>>>>
>>>>>> Not before the bug is fixed and this is also orthogonal to loop 
>>>>>> fusion. 
>>>>>>
>>>>>> > 
>>>>>> > On Monday, August 1, 2016 at 2:22:07 PM UTC+2, Yichao Yu wrote: 
>>>>>> >> 
>>>>>> >> On Mon, Aug 1, 2016 at 8:15 PM, Oliver Schulz 
>>>>>> >> <oliver...@tu-dortmund.de> wrote: 
>>>>>> >> > Hi, 
>>>>>> >> > 
>>>>>> >> > sorry if this is already covered somewhere - have the semantics 
>>>>>> of 
>>>>>> >> > broadcast 
>>>>>> >> > changed in Julia 0.5? 
>>>>>> >> 
>>>>>> >> Essentially https://github.com/JuliaLang/julia/issues/17314 
>>>>>> >> The promote_op basically assumes everything is a pure unary or 
>>>>>> binary 
>>>>>> >> operator. 
>>>>>> >> 
>>>>>> >> > 
>>>>>> >> > In 0.4, I can do 
>>>>>> >> > 
>>>>>> >> > broadcast(muladd, rand(5), rand(5), rand(5)) 
>>>>>> >> > 
>>>>>> >> > But in 0.5 (0.5.0-rc0+86), I get 
>>>>>> >> > 
>>>>>> >> > ERROR: MethodError: no method matching muladd(::Float64, 
>>>>>> ::Float64) 
>>>>>> >> > Closest candidates are: 
>>>>>> >> >   muladd(::Float64, ::Float64, ::Float64) at float.jl:247 
>>>>>> >> >   muladd(::Real, ::Real, ::Complex{T<:Real}) at complex.jl:177 
>>>>>> >> >   muladd{T<:Number}(::T<:Number, ::T<:Number, ::T<:Number) at 
>>>>>> >> > promotion.jl:239 
>>>>>> >> >   ... 
>>>>>> >> > [...] 
>>>>>> >> > 
>>>>>> >> > 
>>>>>> >> > Is this a bug, or to be expected? 
>>>>>> >> > 
>>>>>> >> > Cheers, 
>>>>>> >> > 
>>>>>> >> > Oliver 
>>>>>> >> > 
>>>>>>
>>>>>
>>

Re: [julia-users] Changed broadcast semantics in 0.5?

2016-08-01 Thread Pablo Zubieta
This should work if https://github.com/JuliaLang/julia/pull/17389 gets 
merged.

On Monday, August 1, 2016 at 3:06:36 PM UTC+2, Oliver Schulz wrote:
>
> > Not before the bug is fixed and this is also orthogonal to loop fusion. 
>
> Sure, I get that. But that means then that bug is fixed, things like 
> broadcasting with (e.g.) muladd will be possible again? That would be 
> wonderful!
>
>
>
> On Monday, August 1, 2016 at 2:47:44 PM UTC+2, Yichao Yu wrote:
>>
>> On Mon, Aug 1, 2016 at 8:41 PM, Oliver Schulz 
>>  wrote: 
>> > So cases like 
>> > 
>> > broadcast((x,y,z)->..., A, B, C) 
>> > 
>> > can't be supported any longer? Darn. :-( I love the things you guys are 
>> > doing in regard to fusing operations, but that was a very, very useful 
>> thing 
>> > to have. Is there any other way to do this now? 
>>
>> Not before the bug is fixed and this is also orthogonal to loop fusion. 
>>
>> > 
>> > On Monday, August 1, 2016 at 2:22:07 PM UTC+2, Yichao Yu wrote: 
>> >> 
>> >> On Mon, Aug 1, 2016 at 8:15 PM, Oliver Schulz 
>> >>  wrote: 
>> >> > Hi, 
>> >> > 
>> >> > sorry if this is already covered somewhere - have the semantics of 
>> >> > broadcast 
>> >> > changed in Julia 0.5? 
>> >> 
>> >> Essentially https://github.com/JuliaLang/julia/issues/17314 
>> >> The promote_op basically assumes everything is a pure unary or binary 
>> >> operator. 
>> >> 
>> >> > 
>> >> > In 0.4, I can do 
>> >> > 
>> >> > broadcast(muladd, rand(5), rand(5), rand(5)) 
>> >> > 
>> >> > But in 0.5 (0.5.0-rc0+86), I get 
>> >> > 
>> >> > ERROR: MethodError: no method matching muladd(::Float64, ::Float64) 
>> >> > Closest candidates are: 
>> >> >   muladd(::Float64, ::Float64, ::Float64) at float.jl:247 
>> >> >   muladd(::Real, ::Real, ::Complex{T<:Real}) at complex.jl:177 
>> >> >   muladd{T<:Number}(::T<:Number, ::T<:Number, ::T<:Number) at 
>> >> > promotion.jl:239 
>> >> >   ... 
>> >> > [...] 
>> >> > 
>> >> > 
>> >> > Is this a bug, or to be expected? 
>> >> > 
>> >> > Cheers, 
>> >> > 
>> >> > Oliver 
>> >> > 
>>
>

[julia-users] Re: Help with `promote_op` in v0.5

2016-07-29 Thread Pablo Zubieta
This is due to issue https://github.com/JuliaLang/julia/issues/17314, and 
should get fixed if https://github.com/JuliaLang/julia/pull/17389 gets 
merged.

On Friday, July 29, 2016 at 5:00:14 PM UTC+2, j verzani wrote:
>
> The tests for the Polynomials package are now failing in v0.5. Before
>
> ```
> p1  = Poly([1, 2])
> p = [p1, p1] # 2-element Array{Polynomials.Poly{Float64},1}:
> p + 3  # 2-element Array{Polynomials.Poly{Float64},1}:
> ```
>
> But now, `p+3` is an `Array{Any,1}`. I think the solution is related to 
> how `promote_array_type` uses `promote_op`, which in turn returns `Any` for 
> `Base.promote_op(:+, eltype(p), eltype(3))`. There is a comment in 
> promotion.jl about adding methods to `promote_op`, but as `promote_op` 
> isn't exported this seems like the wrong solution for this case. (As well, 
> I couldn't figure out exactly how to do so.) I could also fix this by 
> defining `.+` to directly call `broadcast`, but that defeats the work in 
> `arraymath.jl`.
>
> Any hints?
>


[julia-users] Re: Extending functions in Base (or another module)

2016-06-28 Thread Pablo Zubieta
You might also want to express your perspective on why a function such as 
invoke is needed here https://github.com/JuliaLang/julia/pull/13123.

On Tuesday, June 28, 2016 at 5:26:16 PM UTC+2, Bill Hart wrote:
>
> We have hit an issue that we can't seem to find a workaround for. Our only 
> working workaround is no longer supported by Julia 0.5.
>
> The issue
> 
>
> We implement determinant for various kinds of matrix types in Nemo and 
> Hecke (two computer algebra/number theory packages). To do this, we extend 
> Base.det in Nemo.
>
> Hecke depends on Nemo and tries to extend Nemo.det (or Base.det).
>
> Hecke wants to define det for one of its own types, let's call it 
> SpecialMat. Now Hecke's SpecialMat belongs to Nemo's abstract type MatElem, 
> i.e. SpecialMat <: MatElem.
>
> Nothing unusual so far.
>
> The problem is: Hecke would like to call the det function provided by Nemo 
> in case the algorithm they provide is going to be slower (a decision that 
> is made at runtime).
>
> What we try
> =
>
> So what we naturally try in Hecke is something like the following 
> (obviously this code doesn't actually work):
>
> module Hecke
>
> using Nemo
>
> type SpecialMat <: MatElem  ## Nemo has a MatElem type class and a 
> "generic" det algorithm for any type that belongs to it
># some data
> end
>
> function det(m::SpecialMat)
>
> # do different things depending on properties of the matrix m
>
>if check_some_properties_of_a_matrix(m)
># implementation of special determinant algorithm that only works 
> for Hecke SpecialMat's
>else
>Nemo.det(m) # fallback to the Nemo implementation of det (which is 
> extensive)
>end
> end
>
> export det
>
> end # module
>
> Here are some potential solutions we tried which didn't work:
>
> 1) Only import the functions from Nemo that Hecke doesn't need to 
> overload, i.e. don't import det from Nemo (or Base)
>
> This causes Julia to tell the user that det could refer to Base.det or 
> Hecke.det, even though Base.det doesn't provide a function for the 
> specified type. We certainly can't expect the user to have to look up all 
> the documentation for Base, Nemo and Hecke every time they call a function 
> so they know how to qualify it. So this isn't a workable solution, 
> obviously. It's also far too verbose.
>
> 2) Try to qualify the function name with the name of the module, i.e. call 
> Nemo.det in the Hecke definition of the function, as above. 
>
>This doesn't work, since it is Nemo.det that is currently being 
> overloaded. So the Hecke det function just segfaults when called.
>
> 3) Look up the method table for the function we require and call the 
> specific method. 
>
>This works in Julia 0.4, but the ability to call methods has been 
> removed in 0.5.
>
> This is an exceedingly frustrating problem. In fact it also occurs within 
> Nemo itself, since every time we want to implement a specialised version of 
> a generic function for a specific type, and have it fall back to the 
> generic version in certain cases determined at runtime, we can't do it, 
> without first renaming the generic implementation to something else with a 
> different name.
>
> This sort of thing is making it very difficult to build large systems. 
> It's not fair to the developers of Hecke to ask them to duplicate all the 
> code in Nemo just so they can make this work, or alternatively force Nemo 
> to define every function twice so that there is a callable version with a 
> different name.
>
> Does anyone know a workaround (any hack that works will do) for this 
> issue, that works in Julia 0.4 and Julia 0.5?
>
> And is there a plan to fix this sort of issue in Julia in the future? The 
> module system currently makes it quite hard to work with multiple modules. 
> We are often encouraged to split our large systems into smaller 
> modules/packages to get around certain issues, and then when we do that, 
> the module system actually gets in the way.
>
> Bill.
>


[julia-users] Re: Memory corruption when using BigFloats

2016-06-07 Thread Pablo Zubieta
It would help to have all the fields of the BigFloat, not only d (but also 
prec, sign and exp). I think it can be a problem with the printing 
functions, so having all that might help to figure out where the problem is.

On Tuesday, June 7, 2016 at 7:26:02 PM UTC+2, Chris Rackauckas wrote:
>
> That's the main issue here. I do have a way of creating them... in a 
> private branch that I cannot share right now (give it a month?). It's not 
> easy to create them either: essentially you can solve a really stiff SDE 
> and get them, so think about repeatedly taking random numbers in a loop, 
> multiplying them by small numbers, and accumulate them. Every once in 
> awhile these may pop up, but I don't have a minimal example right now that 
> does it. It's dependent on the chosen precision, so maybe my minimal 
> examples just weren't tough enough for it.
>
>  However, I can dump the contents of the BigFloat:
>
> b = [unsafe_load(A[76].d,i) for i in 1:8]
>
> Any[0x255f4da0,0x,0x6f702323,0x32363923,0x696c0030,0x6f635f61,
> 0x335f7970,0x00343230]
>
>   Other than that, it's hard to share since if it hits the REPL in any way 
> (printing, or default displaying) it segfaults. I only found them because 
> they show up as random zeros if you try to plot an array that has them on 
> Windows (on Linux Python just throws an error). If a Julia Dev wants to 
> take a look at it in more detail I'll give them temporary access to the 
> branch.
>
>   Otherwise I'll keep this in the back of my mind and when I release this 
> part of the code I'll show exactly how bigfloats (and only bigfloats) fail 
> here, and cause a segfault. I wish I can be more open but my adviser wants 
> this code private until published, so I am sticking with it (again, 
> everything works except not sufficiently high precision bigs, so it's not 
> necessary for the paper at all).
>
> On Tuesday, June 7, 2016 at 9:58:23 AM UTC-7, Pablo Zubieta wrote:
>>
>> Do you happen to have a minimal reproducible example?
>>
>> On Tuesday, June 7, 2016 at 6:23:05 PM UTC+2, Scott Jones wrote:
>>>
>>> I've been trying to help @ChrisRackaukus (of DifferentialEquations.jl 
>>> fame!) out with a nasty bug he's been running into.
>>> He is using `BigFloat`s, and keeps getting numbers that, when printed, 
>>> cause an MPFR assertion failure.
>>> When I looked at these corrupted `BigFloat`s, I found the following 
>>> strings (they always started off corrupted at the location of the pointer + 
>>> 8)
>>>
>>> *"##po#9620\0lia_copy_3024\0"*
>>>
>>> *"julia_annotations_3495\0\0"*
>>>
>>>
>>> This corruption occurred both running on 64-bit Windows, and on Linux 
>>> (Centos), on Julia v0.4.5.
>>>
>>>
>>> Thanks in advance for any clues as to what is causing this corruption!
>>>
>>>
>>>

[julia-users] Re: Memory corruption when using BigFloats

2016-06-07 Thread Pablo Zubieta
What is the precision of the BigFloat you wrote above?

On Tuesday, June 7, 2016 at 7:26:02 PM UTC+2, Chris Rackauckas wrote:
>
> That's the main issue here. I do have a way of creating them... in a 
> private branch that I cannot share right now (give it a month?). It's not 
> easy to create them either: essentially you can solve a really stiff SDE 
> and get them, so think about repeatedly taking random numbers in a loop, 
> multiplying them by small numbers, and accumulate them. Every once in 
> awhile these may pop up, but I don't have a minimal example right now that 
> does it. It's dependent on the chosen precision, so maybe my minimal 
> examples just weren't tough enough for it.
>
>  However, I can dump the contents of the BigFloat:
>
> b = [unsafe_load(A[76].d,i) for i in 1:8]
>
> Any[0x255f4da0,0x,0x6f702323,0x32363923,0x696c0030,0x6f635f61,
> 0x335f7970,0x00343230]
>
>   Other than that, it's hard to share since if it hits the REPL in any way 
> (printing, or default displaying) it segfaults. I only found them because 
> they show up as random zeros if you try to plot an array that has them on 
> Windows (on Linux Python just throws an error). If a Julia Dev wants to 
> take a look at it in more detail I'll give them temporary access to the 
> branch.
>
>   Otherwise I'll keep this in the back of my mind and when I release this 
> part of the code I'll show exactly how bigfloats (and only bigfloats) fail 
> here, and cause a segfault. I wish I can be more open but my adviser wants 
> this code private until published, so I am sticking with it (again, 
> everything works except not sufficiently high precision bigs, so it's not 
> necessary for the paper at all).
>
> On Tuesday, June 7, 2016 at 9:58:23 AM UTC-7, Pablo Zubieta wrote:
>>
>> Do you happen to have a minimal reproducible example?
>>
>> On Tuesday, June 7, 2016 at 6:23:05 PM UTC+2, Scott Jones wrote:
>>>
>>> I've been trying to help @ChrisRackaukus (of DifferentialEquations.jl 
>>> fame!) out with a nasty bug he's been running into.
>>> He is using `BigFloat`s, and keeps getting numbers that, when printed, 
>>> cause an MPFR assertion failure.
>>> When I looked at these corrupted `BigFloat`s, I found the following 
>>> strings (they always started off corrupted at the location of the pointer + 
>>> 8)
>>>
>>> *"##po#9620\0lia_copy_3024\0"*
>>>
>>> *"julia_annotations_3495\0\0"*
>>>
>>>
>>> This corruption occurred both running on 64-bit Windows, and on Linux 
>>> (Centos), on Julia v0.4.5.
>>>
>>>
>>> Thanks in advance for any clues as to what is causing this corruption!
>>>
>>>
>>>

[julia-users] Re: How to produce the ≉ symbol?

2016-06-03 Thread Pablo Zubieta
Use `\napprox` (type it and press TAB in the REPL to get ≉).


Re: [julia-users] [Plots.jl] Sequential color of consecutive data series

2016-03-03 Thread Pablo Zubieta
Thank you, this works fine for me.

I understand the motivation of generating the palette based on the 
background color, and I don't expect this to be implemented. No worries.



[julia-users] Re: let unpack

2016-02-11 Thread Pablo Zubieta
let; (a,b,c) = (1, 2, 3); a end

(note the semicolon) works. But I think it should work as Mauro stated.
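A quick check of the workaround (the semicolon makes the destructuring 
assignment part of the let body rather than its binding list):

```julia
# With `let;`, the tuple unpacking happens inside the block's body.
x = let; (a, b, c) = (1, 2, 3); a end
x  # 1
```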


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Pablo Zubieta
I am seeing the opposite on my machine (the Julia version being 4 times 
faster than the one in Python).

It might be a problem with the BLAS library that is being used. What is 
your platform and Julia version?


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Pablo Zubieta
I just stumbled upon a comment on a GitHub issue. It seems that the current 
situation in Julia won't let you match the numpy code yet (until that issue 
is solved). But from the comment in the link, the following should be faster 
(it is at least on my machine).

### Julia code
import Base.LinAlg, Base.LinAlg.BlasReal, Base.LinAlg.BlasComplex

Base.disable_threaded_libs()

function sig(x::Union{Float64,Vector})
  return 1.0 ./ (1. + exp(-x))
end

function mf_loop(Ndt::Int64,V0::Vector{Float64},V::Vector{Float64},dt::
Float64,W::Matrix{Float64},J::Matrix{Float64})
sv = copy(V0)
V  = copy(V0)
for i=1:Ndt
#sv = sig(V)
#V = V + ( -V + J*sv ) * dt + W[:,i]
BLAS.gemv!('N',dt,J,sv,1.0-dt,V)
BLAS.axpy!(1., W[:,i], V)
end
nothing 
end

N = 100
dt = 0.1
Ndt = 1

sigma  = 0.1
W = (randn(N,Ndt)) * sigma * sqrt(dt)
J = (rand(N,N))/N
  
V0 = rand(N)
V = copy(V0)

println( "calcul\n")
mf_loop(1,V0,V,dt,W,J)
@time mf_loop(Ndt,V0,V,dt,W,J)

Cheers.


[julia-users] Re: Euler Maruyama for neural networks, slower than python

2016-01-21 Thread Pablo Zubieta
I guess that you can also throw a couple of @inbounds before the for loops 
in Mauro's solution to improve things a bit more.


Re: [julia-users] Re: Sort function

2016-01-20 Thread Pablo Zubieta
I don't know why it was removed, but if you need that to work you can define

function Base.isless(A::AbstractVector, B::AbstractVector)
a, b = length(A), length(B)
a == 0 && return b > 0
for i in 1:min(a,b)
A[i] != B[i] && return A[i] < B[i]
end
return a < b
end

and then your example will work.
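For instance, with that method defined, sorting a vector of vectors orders 
them lexicographically (later Julia versions ship an equivalent isless for 
arrays in Base, so the definition above is only needed where it is missing):

```julia
v = [[2, 1], [1, 3], [1, 2], Int[]]
sort(v)  # Int[] first, then [1, 2], [1, 3], [2, 1]
```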


[julia-users] Chars with ending double quotes

2016-01-15 Thread Pablo Zubieta
Is there a particular reason for the following to be valid Chars?

'\""'
'\"""' # or any number of "s
'\a"'
'\Ψ"""'

More generally the pattern '\[Any valid code point][Any number of "]' seems 
to be a valid Char.




Re: [julia-users] Chars with ending double quotes

2016-01-15 Thread Pablo Zubieta
I thought so. I filed an issue here:

https://github.com/JuliaLang/julia/issues/14683


Re: [julia-users] Re: A Julia code slower than Matlab, can it be faster?

2015-12-15 Thread Pablo Zubieta
Stefan said that your code is well typed (type stable). Have you tried 
Yichao's suggestions?

Here is the function ground() taking them into account:

function ground()
R = 320.
Ks = 2^11 - 1
x = linspace(-R, R, Ks+1)
dx = 2R/Ks
dt = 0.1
pm = 2π/2dx
px = circshift( linspace(-pm, pm, Ks+1),  round(Int64, Ks/2) )
FFTW.set_num_threads(8)


V = Float64[Potential(ix, iy)  for ix in x, iy in x]
ko = Float64[e^(-dt/4* (Ko(ipx,ipy))) for ipx in px, ipy in px]
ko2 = ko.^2
vo = Float64[e^(-dt* (Potential(ix,iy))) for ix in x, iy in x]
ϕ = Array(Complex128,(Ks+1,Ks+1))
Pl = plan_fft!(ϕ; flags=FFTW.MEASURE)
for i in eachindex(ϕ); ϕ[i] = V[i]; end # Conversion is automatic here


invnormphi = 1. / (sqrt(sumabs2(ϕ))*dx)
scale!(ϕ, invnormphi)


Pl * ϕ # No need to assign to ϕ
for i in eachindex(ϕ); ϕ[i] *= ko[i]; end
Pl \ ϕ

ϕ′ = similar(ϕ)

Δϕ = 1.
nstep = 0
println("start loop")
while Δϕ > 1.e-15
copy!(ϕ′, ϕ)
for i in eachindex(ϕ); ϕ[i] *= vo[i]; end

invnormphi = 1. / (sqrt(sumabs2(ϕ))*dx)
scale!(ϕ, invnormphi)


Pl * ϕ
for i in eachindex(ϕ); ϕ[i] *= ko2[i]; end
Pl \ ϕ
# if check  Δϕ for every step, 35s is needed.
if nstep>500
Δϕ = maxabs(ϕ-ϕ′)
end
nstep += 1
if mod(nstep,200)==0
print(nstep,"  ")
println("Δϕ=",Δϕ)
end

end
for i in eachindex(ϕ); ϕ[i] *= vo[i]; end

Pl * ϕ
for i in eachindex(ϕ); ϕ[i] *= ko[i]; end
Pl \ ϕ


invnormphi = 1. / (sqrt(sumabs2(ϕ))*dx)
scale!(ϕ, invnormphi)
end





[julia-users] `<` and `isless` for GradientNumber (ForwardDiff.jl)

2015-12-10 Thread Pablo Zubieta
Hi everyone

I was using ForwardDiff.jl and found that I can't use <(g::GradientNumber, 
x::Real), whereas I can use isless(g::GradientNumber, x::Real)

If I define a custom type and the corresponding isless method, I can use < 
automatically. Why is that not the case for GradientNumber?

Thanks,
Pablo Zubieta


[julia-users] Re: `<` and `isless` for GradientNumber (ForwardDiff.jl)

2015-12-10 Thread Pablo Zubieta
It doesn't work with the master branch of ForwardDiff.jl either, so I 
filed a bug report:

https://github.com/JuliaDiff/ForwardDiff.jl/issues/76


[julia-users] Re: `<` and `isless` for GradientNumber (ForwardDiff.jl)

2015-12-10 Thread Pablo Zubieta
I'm using julia 0.4.1. Here's an example

using ForwardDiff

foo(l, x) = ifelse( abs(x) < l, sin(x), cos(x) )
bar(l, x) = ifelse( isless(abs(x),l), sin(x), cos(x) )

derivative(x->foo(2,x), 1.) # ERROR: < not defined for ForwardDiff.
GradientNumber{1,Float64,Tuple{Float64}}
derivative(x->bar(2,x), 1.) # 0.5403023058681398

derivative(x->foo(2,x), 3.) # ERROR: < not defined for ForwardDiff.
GradientNumber{1,Float64,Tuple{Float64}}
derivative(x->bar(2,x), 3.) # -0.1411200080598672



immutable FooType
v
end

value(f::FooType) = f.v
Base.isless(f::FooType, x::Real) = value(f) < x
Base.isless(x::Real, f::FooType) = x < value(f)

FooType(3) < 4 # true
FooType(3) < 1 # false

1 < FooType(3) # true
3 < FooType(3) # false




[julia-users] Re: `<` and `isless` for GradientNumber (ForwardDiff.jl)

2015-12-10 Thread Pablo Zubieta
Ok, I looked a little deeper and for some reason

<(g::GradientNumber, x::Real)

is calling

<(x::Real, y::Real) = <(promote(x,y)...)

which in the case of the example above promotes both arguments to 
GradientNumber. The method `isless(g1::GradientNumber, g2::GradientNumber)` 
is not defined in the ForwardDiff.jl version I have, but I can see that 
there seems to be a solution to this now 
<https://github.com/JuliaDiff/ForwardDiff.jl/blob/master/src/ForwardDiffNumber.jl>
.

I will try to check out the newest version. Anyway, if that doesn't fix this 
I will report the bug there.

Sorry for the noise.

Cheers,
Pablo Zubieta



Re: [julia-users] Re: unknown option root

2015-10-12 Thread Pablo Zubieta
If you are running Ubuntu you should run

sudo apt-get install xscreensaver-data-extra

instead.


[julia-users] Re: problem with bigfloat precision

2015-05-14 Thread Pablo Zubieta
I would think a definition like

mylog(b, x) = ((b, x) = promote(b, x); log(x) ./ log(b))

is even more robust (the promoted values must be assigned back for the 
promote call to have an effect), in case someone wants to write mylog(2.1, 2).

Stefan, should this be the definition in Base?
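A self-contained sketch of that definition with a few mixed-type calls (illustrative examples, not from the original post):

```julia
# Promote both arguments to a common type before taking logarithms, so
# calls like mylog(2.1, 2) operate on a single floating-point type.
mylog(b, x) = ((b, x) = promote(b, x); log(x) / log(b))

mylog(2, 8)                     # ≈ 3.0
mylog(2.1, 2)                   # both promoted to Float64
mylog(BigFloat(2), BigFloat(8)) # BigFloat result keeps extended precision
```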


[julia-users] Re: problem with bigfloat precision

2015-05-14 Thread Pablo Zubieta
Nevermind I saw your PR.


[julia-users] Re: Performance difference between running in REPL and calling a script?

2015-03-27 Thread Pablo Zubieta
Hi Michael

As Stefan just pointed out, in Julia one should in general take advantage 
of the multiple dispatch paradigm and write functions that dispatch on 
types, instead of having functions belonging to the type (as you would 
in Python).

Let me give you a simple example by rewriting your Fyle type.

You have something like this

type Fyle
    size::Float64
    origSize::Float64
    lifetime::Int

    get_lifetime::Function
    get_size::Function

    function Fyle(size = 0)
        this = new()

        this.size = size
        this.lifetime = 0
        this.origSize = size

        this.get_lifetime = function()
            return this.lifetime
        end
        this.get_size = function()
            return this.size
        end

        return this
    end
end

when you should have really written something like this

# I am going to define a method size for your type Fyle.
# size is a generic function in Base, so one should import it in order to
# be able to extend it for a user type.
import Base.size

type Fyle
size::Float64 
origSize::Float64 
lifetime::Int
end

# Outer constructor (in some cases you might want to define an inner one,
# as you did). Read the manual section on Constructors.
Fyle(size = 0) = Fyle(size, size, 0)

# Define the desired functions to dispatch on the Type
lifetime(f::Fyle) = f.lifetime
size(f::Fyle) = f.size
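Putting it together, usage looks like this (a self-contained sketch in the 0.4-era `type` syntax used above; on Julia 1.x it would be `mutable struct`). The `age!` method is hypothetical, added here only to show that new behavior can be attached to the type without storing closures in its fields:

```julia
import Base.size

type Fyle
    size::Float64
    origSize::Float64
    lifetime::Int
end

# Outer constructor filling origSize and lifetime.
Fyle(size = 0) = Fyle(size, size, 0)

lifetime(f::Fyle) = f.lifetime
size(f::Fyle) = f.size

# Hypothetical mutating method: plain functions dispatching on the type
# extend it from the outside, no constructor changes needed.
age!(f::Fyle) = (f.lifetime += 1; f)

f = Fyle(10.0)
size(f)           # 10.0
lifetime(age!(f)) # 1
```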

Hope this helps.

Greetings, Pablo




[julia-users] Re: Print(char(0x08))

2015-03-06 Thread Pablo Zubieta
It works for me in the julia 0.3.6 REPL (on linux), but it does not seem to 
work on IJulia.


[julia-users] Re: Constructing matrices

2015-02-23 Thread Pablo Zubieta
Yeah, I have never thought of that, but it seems kind of confusing.

You can do

julia> hvcat(1, 1, 2, 3)
3x1 Array{Int64,2}:
 1
 2
 3

julia> hvcat(1, 1, 2, 3)'' == hvcat(1, 1, 2, 3)
true

Greetings.


Re: [julia-users] Numerical precision of simple mathematical operations

2015-02-12 Thread Pablo Zubieta
In your case you may want to try with arbitrary precision floats (BigFloat)

So just define

x  = BigFloat[1, 0.045126, 6.9315]
w1 = w2 = w3 = BigFloat[1 1 1]
w4 = BigFloat[1 -10 -10]

and call your function as

compute(f1[1], f2[1], f3[1], d[1], BigFloat(0), BigFloat(153))
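The payoff is easy to see on a tiny cancellation example (hypothetical values, unrelated to the f1/f2/f3 data above): a term below Float64's machine epsilon vanishes in double precision but survives in BigFloat:

```julia
x64  = 1.0 + 1e-17              # 1e-17 < eps(Float64) ≈ 2.2e-16: lost
xbig = BigFloat(1) + BigFloat(10)^-17

x64 == 1.0   # true, the small term disappeared
xbig > 1     # true, BigFloat (256-bit default) keeps it
```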


Re: [julia-users] Levi-Civita symbol/tensor

2015-02-11 Thread Pablo Zubieta
Hi again,

There were some bugs in my implementations. I updated the gist 
https://gist.github.com/pabloferz/01675f1bf4c8be359767#file-levicivita-jl 
with the corrected versions and added a simpler looking function (but of 
O(n²) running time).

I did some tests and found (with my slow processor) that for permutations 
of length <= 5 the quadratic implementation (levicivita_simple) performs as 
fast as levicivita_inplace_check. For lengths from 5 to 15, 
levicivita_inplace_check is the fastest, followed by levicivita_simple. For 
lengths from 15 to 25, levicivita_simple and levicivita perform the same 
(but slower than levicivita_inplace_check). For more than 25 elements, 
levicivita_inplace_check is always the fastest: 2x faster than levicivita 
and n times faster than levicivita_simple.

For people wanting the 3D Levi-Civita tensor, levicivita_simple and 
levicivita_inplace_check should perform the same. For people wanting the 
parity of long permutations, levicivita_inplace_check should work best.
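For context, the O(n) implementations rest on cycle counting: a permutation's sign is (-1)^(n - c), where c is its number of cycles. A minimal sketch of the idea (not the exact gist code; written in the 0.4-era method syntax used throughout this thread, which on Julia 1.x would be `where T<:Integer`):

```julia
# O(n) Levi-Civita symbol / permutation parity via cycle counting.
# Returns 0 if p is not a permutation of 1:n, otherwise +1 or -1.
function levicivita_cycles{T<:Integer}(p::AbstractVector{T})
    n = length(p)
    seen = falses(n)
    cycles = 0
    for i = 1:n
        seen[i] && continue
        cycles += 1
        j = i
        while !seen[j]
            seen[j] = true
            1 <= p[j] <= n || return 0  # entry out of range: not a permutation
            j = p[j]
        end
        j == i || return 0              # walked into an earlier cycle: duplicate entry
    end
    return iseven(n - cycles) ? 1 : -1
end

levicivita_cycles([2, 3, 1])  # 1  (even 3-cycle)
levicivita_cycles([2, 1, 3])  # -1 (single transposition)
levicivita_cycles([1, 1, 2])  # 0  (not a permutation)
```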

Greetings!


Re: [julia-users] Levi-Civita symbol/tensor

2015-02-11 Thread Pablo Zubieta
Done!

https://github.com/JuliaLang/julia/issues/10172


Re: [julia-users] Levi-Civita symbol/tensor

2015-02-10 Thread Pablo Zubieta
Hi all!

I believe that

sign{T<:Integer}(perm::AbstractVector{T})

is already defined and returns something like

[sign(n) for n in perm]

So, I think it would be better to call it by another name, or with a 
different signature.

By looking at the code used in Permutations, I came up with two solutions 
that should run in O(n) time. They are optimized versions of 
Permutations.parity and can be found here:

https://gist.github.com/pabloferz/01675f1bf4c8be359767#file-levicivita-jl

The second option is almost the same as the first one, but instead of 
first checking whether the Vector is a permutation at the beginning, it 
does the check in place. There's probably not much difference between the 
two; I have not analyzed this further. Anyway, I believe both should work fine.

Cheers!


Re: [julia-users] Levi-Civita symbol/tensor

2015-02-10 Thread Pablo Zubieta
The first loop in the previous post might as well be

for i = 1:n-1

Sorry for the number of posts.


Re: [julia-users] Levi-Civita symbol/tensor

2015-02-10 Thread Pablo Zubieta
Also, a brute-force (but simple-looking) approach that is O(n²) might be

function levicivita{T<:Integer}(p::AbstractVector{T})
    n = length(p)
    ɛ = 1

    for i = 1:n
        for j = i+1:n
            ɛ *= sign(p[j] - p[i])
            ɛ == 0 && return 0
        end
    end

    return ɛ
end



[julia-users] Re: JuliaBox

2014-11-10 Thread Pablo Zubieta
Hi Shashi, I would like a code too.

Thanks in advance,
Pablo


[julia-users] Re: Benchmarks for Julia 0.3x ???

2014-10-28 Thread Pablo Zubieta
I've been trying to help with this, and already fixed the Java and R issues 
(I think). I've been thinking about how to fix the others, but I would like 
some feedback first.

To fix the C and Fortran issue 
https://github.com/JuliaLang/julia/issues/4821 (which basically consists 
of avoiding constant folding and propagation), I was thinking of writing an 
external (configuration) file to hold the pertinent constants and reading 
them from that file, so their values won't be known until run time.

Does this seem like a good solution?


[julia-users] Re: Benchmarks for Julia 0.3x ???

2014-10-23 Thread Pablo Zubieta
I've been asking myself the same as Uwe. Is there any place where we can 
find this information?

Greetings,
Pablo


[julia-users] immutable types with array fields

2014-09-13 Thread Pablo Zubieta
Hi everyone

I was wondering (since it seems that we are allowed to use array fields 
inside immutable types) if there are problems, or any loss of the benefits 
of using immutable types, if I do one of the following:


   1. Have an array field that is going to maintain its size but not the 
   values of its entries once the immutable is instantiated.
   2. Have an array field of variable size.
   3. Have an array field that will be unchanged.


I can't think of more scenarios for now, but I am curious about what could 
happen in general when using arrays as fields of immutables. Maybe I 
shouldn't be doing it at all. Does anybody know?


[julia-users] Re: immutable types with array fields

2014-09-13 Thread Pablo Zubieta
I see. Yeah, I wasn't sure if there was already some work done on 
fixed-size arrays. But it is great to know there are plans to work on it.

Thanks for the answer.


[julia-users] Re: ANN: PGF/TikZ packages

2014-08-23 Thread Pablo Zubieta
@Kaj Wiik

I believe you need to use:

a = Plot.Linear(rand(20), rand(20))

Checkout here:

http://nbviewer.ipython.org/github/sisl/PGFPlots.jl/blob/master/doc/PGFPlots.ipynb

Cheers,
Pablo


[julia-users] Error using Pkg.add

2014-08-10 Thread Pablo Zubieta
I am currently using Ubuntu 14.04 and the Julia Nightlies PPA. Recently I 
updated to the 0.3.0-rc2 version, and I get the following error when trying 
to add a package:

signal (4): Illegal instruction
_mapreduce at ./reduce.jl:168
prune_versions at ./pkg/query.jl:141
prune_dependencies at ./pkg/query.jl:335
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
line)
resolve at ./pkg/entry.jl:379
edit at pkg/entry.jl:24
jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
line)
anonymous at task.jl:340
unknown function (ip: -1389497680)
unknown function (ip: -1389497544)
julia_trampoline at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
line)
unknown function (ip: 4199613)
__libc_start_main at /lib/x86_64-linux-gnu/libc.so.6 (unknown line)
unknown function (ip: 4199667)
unknown function (ip: 0)
Illegal instruction

Is there a way to fix this?


[julia-users] Re: Error using Pkg.add

2014-08-10 Thread Pablo Zubieta
My processor is an AMD Turion X2. I was using the 0.3.0-rc1 version without 
a problem.



[julia-users] Re: Error using Pkg.add

2014-08-10 Thread Pablo Zubieta
Is it safe to remove the sys.so file?


[julia-users] Re: Error using Pkg.add

2014-08-10 Thread Pablo Zubieta
I renamed the file, but it does not sort out the problem.


[julia-users] Re: Error using Pkg.add

2014-08-10 Thread Pablo Zubieta
I forgot to mention, I am not running a VM.


[julia-users] Re: Error using Pkg.add

2014-08-10 Thread Pablo Zubieta
I can't see an sse3 flag in /proc/cpuinfo. Here it is anyway.

processor: 0
vendor_id: AuthenticAMD
cpu family: 17
model: 3
model name: AMD Turion(tm) X2 Dual-Core Mobile RM-70
stepping: 1
microcode: 0x232
cpu MHz: 500.000
cache size: 512 KB
physical id: 0
siblings: 2
core id: 0
cpu cores: 2
apicid: 0
initial apicid: 0
fpu: yes
fpu_exception: yes
cpuid level: 1
wp: yes
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt 
rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid 
pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch osvw 
skinit hw_pstate lbrv svm_lock nrip_save
bogomips: 4000.39
TLB size: 1024 4K pages
clflush size: 64
cache_alignment: 64
address sizes: 40 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate

processor: 1
vendor_id: AuthenticAMD
cpu family: 17
model: 3
model name: AMD Turion(tm) X2 Dual-Core Mobile RM-70
stepping: 1
microcode: 0x232
cpu MHz: 2000.000
cache size: 512 KB
physical id: 0
siblings: 2
core id: 1
cpu cores: 2
apicid: 1
initial apicid: 1
fpu: yes
fpu_exception: yes
cpuid level: 1
wp: yes
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt 
rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid 
pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy 3dnowprefetch osvw 
skinit hw_pstate lbrv svm_lock nrip_save
bogomips: 4000.39
TLB size: 1024 4K pages
clflush size: 64
cache_alignment: 64
address sizes: 40 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate

Should I still open an issue?


[julia-users] Re: Error using Pkg.add

2014-08-10 Thread Pablo Zubieta
Done!

Here is the link to the issue https://github.com/JuliaLang/julia/issues/7946
.


[julia-users] Re: Strange LLVM error

2014-08-01 Thread Pablo Zubieta
I don't think that leads to an infinite loop. Instead, it creates a 
12-dimensional array with 3^12 (531441) elements. If you use

rand(ones(Int, 12)...);

I guess computing the array takes almost no time. The problem is 
printing or showing it.


Re: [julia-users] Re: essay on the history of programming languages

2014-07-16 Thread Pablo Zubieta
Ismael, although ASCIIString is a subtype of String, Array{ASCIIString,1} 
is not a subtype of Array{String,1}. As far as I understand, Julia has been 
designed this way for practical reasons. You can read about it in the 
manual's section on Types.
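The invariance is easy to check directly (illustrative; ASCIIString is a 0.4-era type, but the same holds for any parametric type):

```julia
ASCIIString <: String                    # true
Array{ASCIIString,1} <: Array{String,1}  # false: type parameters are invariant
Array{Int,1} <: Array{Number,1}          # false, for the same reason
```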

You might want to rewrite your function as

function Pkg.add{T<:String}(pkgs::Array{T, 1})
    Pkg.update()
    for pkg in pkgs
        Pkg.add(pkg)
    end
end



[julia-users] Re: Manual deallocation

2014-07-08 Thread Pablo Zubieta
Thanks for the answers. I agree that a manual chapter would be helpful.