Re: [julia-users] [ANN] more error-free transformations

2015-11-25 Thread Tom Breloff
Actually this is really smart. You can represent exact values (lsp == 0) or
open intervals one ulp wide... This is a good chunk of the value of
fixed-size unums. I'll be keeping a close eye on the package. Thanks Jeffrey.

On Wednesday, November 25, 2015, Jeffrey Sarnoff wrote:

>
> These are distinct operators that substitute directly for (+),(-),(*),(/)
> in situations where one wants to obtain more of the mathematically true result
> than is usually available:
>
> two = 2.0; sqrt2 = sqrt(2);
> residualValueRoundedAway = Float64(sqrt(big(2)) - sqrt2) #
> -9.667293313452913e-17
>
> mostSignificantPart, leastSignificantPart = eftSqrt(two)
> mostSignificantPart ==  1.4142135623730951
> leastSignificantPart == -9.667293313452912e-17 # we recover the residual
> value, itself at Float64 precision
>
> so we obtain the arithmetic result at twice the 'working' precision (in
> two parts, the mspart == the usual result).
>
> exp1log2 = exp(1.0)*log(2.0);
>   #  1.88416938536372
> residualValueRoundedAway = Float64(exp(big(1))*log(big(2)) - exp1log2) #
>  8.146538547111741e-17
>
> mostSignificantPart, leastSignificantPart = eftProd2( exp(1.0), log(2.0) )
>   # (1.88416938536372, -8.177744937186283e-17)
>
> --
> These transformations have the additional benefit that the two parts are
> well separated: they do not overlap in the working precision.
> So, in all cases, mostSignificantPart + leastSignificantPart ==
> mostSignificantPart.
> They are as well separated as possible, without losing information.
>
> These functions are well-suited to assisting the implementation of
> extended-precision floating-point math.
> Another application (that, until otherwise informed, I'll say is from me)
> is to accelerate inline rounding (see RoundFast.jl for how).
>
> Assuming one had a Float64 unum-ish capability, a double-double float
> would extend the precision.
> (Ultimately, all these parts should meld)
>
>
> On Wednesday, November 25, 2015 at 9:19:08 AM UTC-5, Tom Breloff wrote:
>>
>> Thanks Jeffrey. Can you expand on the specifics of the package?  What
>> would you say are the primary use cases? How does this differ from interval
>> arithmetic or Unums?
>>
>> On Wednesday, November 25, 2015, Jeffrey Sarnoff wrote:
>>
>>> ErrorFreeArith.jl offers
>>> error-free transformations not (yet?) included in the ErrorFreeTransforms
>>> package by dsiem.
>>>
>>> These operations convey the usual arithmetic result accompanied by a
>>> residual value that is usually lost to rounding.
>>> This gives the correct value at twice the working precision (correctly
>>> rounded for +,-,*,/; still 1/(1/x) = x or x ± ulp(x)).
>>>
>>>
>>>
>>>
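
[Editor's note] For readers skimming the archive: the eft* operations above are
built on the classic error-free transformations. A minimal sketch of the idea in
Julia (textbook two-sum and fma-based two-product; the names twoSum/twoProd are
illustrative, not necessarily ErrorFreeArith.jl's API):

# Knuth's branch-free two-sum: s is the rounded sum, e the rounding error,
# and s + e recovers a + b exactly.
function twoSum(a::Float64, b::Float64)
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e
end

# fma-based two-product: p + e == a * b exactly (requires a correct fma).
function twoProd(a::Float64, b::Float64)
    p = a * b
    e = fma(a, b, -p)
    return p, e
end

twoProd(exp(1.0), log(2.0))  # should reproduce eftProd2's residual above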


Re: [julia-users] Efficiently constructing a sparse matrix

2015-11-25 Thread Tim Holy
I = Int[]
J = Int[]
V = Float64[]

while have_more_entries_to_add()
 i, j, v = get_next_entry()
push!(I, i); push!(J, j); push!(V, v)
end

arry = sparse(I, J, V)

--Tim

On Wednesday, November 25, 2015 08:05:12 AM Cedric St-Jean wrote:
> How do people usually do this? Assignment is slow
> 
> arr = spzeros(1000,1000)
> for ...
>arr[i,j] = ...
> ...
> 
> and indeed, in scipy one would use a non-CSC format for construction, then
> convert it to CSC. I've seen some comments that one should just build the
> SparseMatrixCSC object directly for performance. Is that still the usual
> advice?
> 
> Cédric
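
[Editor's note] Tim's answer is the COO (triplet) idiom: accumulate (i, j, v)
triplets in plain vectors, then let sparse() assemble the CSC matrix in one pass
(duplicate entries are summed). A small hedged addition: when the entry count is
roughly known, sizehint! avoids repeated reallocation of the triplet vectors.
A sketch with made-up dimensions:

I = Int[]; J = Int[]; V = Float64[]
n = 10000                        # rough guess at the number of entries
sizehint!(I, n); sizehint!(J, n); sizehint!(V, n)
for k in 1:n
    push!(I, rand(1:1000)); push!(J, rand(1:1000)); push!(V, rand())
end
A = sparse(I, J, V, 1000, 1000)  # give the dimensions explicitly if known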



Re: [julia-users] how to make my own 3d array type

2015-11-25 Thread Evan Mason
Great, thanks.  So, I now have:

immutable MyType2{T,N,A<:AbstractArray, I<:Int} <: AbstractArray{T,N}
var :: A
nstar :: I
end

MyType2{T,N}(var::AbstractArray{T,N}, nstar::Int) = MyType2{T,N,
typeof(var), typeof(nstar)}(var, nstar)

Base.linearindexing(::Type{MyType2}) = Base.LinearFast()
Base.size(S::MyType2) = (S.nstar,)

and I found base/linalg/eigen.jl helpful in figuring this out.


On Wed, Nov 25, 2015 at 12:53 PM, Tim Holy wrote:

> Check out the "Types" section of the manual, esp. the part about "Composite
> Types."
>
> Also, see base/subarray.jl if you want an example of a complex but full-
> featured AbstractArray type.
>
> Best,
> --Tim
>
> On Wednesday, November 25, 2015 03:24:39 AM Evan wrote:
> > Thanks Tim,
> >
> > Following the link you suggested I made the following type which does
> what
> > I want:
> >
> > immutable MyType{T,N,A<:AbstractArray} <: AbstractArray{Float64,3}
> > conf_lims :: A
> > end
> > MyType{T,N}(conf_lims::AbstractArray{T,N}) =
> MyType{T,N,typeof(conf_lims)}(
> > conf_lims)
> > Base.size(S::MyType) = (S.count,)
> > Base.linearindexing(::Type{MyType}) = Base.LinearFast();
> >
> >
> > What I haven't figured out is how to add more than just the one variable;
> > for instance I'd like to have a "count" variable which would be an
> integer?
> >
> > On Monday, November 16, 2015 at 3:45:43 PM UTC+1, Tim Holy wrote:
> > > Totally different error (it's a missing size method). You also need to
> > > read
> > > this:
> > > http://docs.julialang.org/en/stable/manual/interfaces/#abstract-arrays
> > >
> > > --Tim
> > >
> > > On Monday, November 16, 2015 06:08:15 AM Evan wrote:
> > > > Thank you, Tim, I understand the motivation for the approach in the
> link
> > > > you sent.
> > > >
> > > > I'm still unable to create the type that I want. I actually want two
> > >
> > > arrays
> > >
> > > > in my type, one 2d and the other 3d.
> > > >
> > > > First though, I'm still stuck on just the 2d when I try to implement
> > >
> > > your
> > >
> > > > suggestion:
> > > > julia> type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N}
> > > >
> > > >var::A
> > > >
> > > >end
> > > >
> > > > julia> MyType{T,N}(var::AbstractArray{T,N}) =
> > >
> > > MyType{T,N,typeof(var)}(var)
> > >
> > > > MyType{T,N,A<:AbstractArray{T,N}}
> > > >
> > > > julia> aa = MyType(zeros(2,3))
> > > > Error showing value of type MyType{Float64,2,Array{Float64,2}}:
> > > > ERROR: MethodError: `size` has no method matching
> > >
> > > size(::MyType{Float64,2,
> > >
> > > > Array{Float64,2}})
> > > >
> > > > Closest candidates are:
> > > >   size{T,n}(::AbstractArray{T,n}, ::Any)
> > > >   size(::Any, ::Integer, ::Integer, ::Integer...)
> > > >
> > > >  in showarray at show.jl:1231
> > > >  in anonymous at replutil.jl:29
> > > >  in with_output_limit at ./show.jl:1271
> > > >  in writemime at replutil.jl:28
> > > >  in display at REPL.jl:114
> > > >  in display at REPL.jl:117
> > > >  [inlined code] from multimedia.jl:151
> > > >  in display at multimedia.jl:162
> > > >  in print_response at REPL.jl:134
> > > >  in print_response at REPL.jl:121
> > > >  in anonymous at REPL.jl:624
> > > >  in run_interface at ./LineEdit.jl:1610
> > > >  in run_frontend at ./REPL.jl:863
> > > >  in run_repl at ./REPL.jl:167
> > > >  in _start at ./client.jl:420
> > > >
> > > > julia>
> > > >
> > > > Performance is important, but it's not clear what I should use in
> place
> > >
> > > of
> > >
> > > > abstract types; I've tried replacing AbstractArray with just Array
> but
> > >
> > > that
> > >
> > > > does not appear to work.
> > > >
> > > > On Monday, November 16, 2015 at 1:03:05 PM UTC+1, Tim Holy wrote:
> > > > > This fixes two problems:
> > > > >
> > > > > type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N}
> > > > >
> > > > > var::A
> > > > >
> > > > > end
> > > > >
> > > > > MyType{T,N}(var::AbstractArray{T,N}) = MyType{T,N,typeof(var)}(var)
> > > > >
> > > > >
> > > > >
> > > > > If performance matters, you should not use abstract types for
> fields.
> > >
> > > See
> > >
> > >
> > >
> http://docs.julialang.org/en/stable/manual/faq/#how-do-abstract-or-ambiguo
> > >
> > > > > us-fields-in-types-interact-with-the-compiler and the section after
> > >
> > > that.
> > >
> > > > > --Tim
> > > > >
> > > > > On Monday, November 16, 2015 03:54:15 AM Evan wrote:
> > > > > > For a 2d array the following works:
> > > > > >
> > > > > > type mytype{T}
> > > > > >
> > > > > > var :: AbstractMatrix{T}
> > > > > >
> > > > > > end
> > > > > >
> > > > > > julia> t = mytype(zeros(4, 3))
> > > > > >
> > > > > > mytype{Float64}(4x3 Array{Float64,2}:
> > > > > >  0.0  0.0  0.0
> > > > > >  0.0  0.0  0.0
> > > > > >  0.0  0.0  0.0
> > > > > >  0.0  0.0  0.0)
> > > > > >
> > > > > > But how do I extend this to a 3d array?
> > > > > >
> > > > > > julia> t = mytype(zeros(2, 4, 3))
> > > > > > ERROR: MethodError: `convert` has no method matching
> > > > >
> > > > > convert(::Type{mytype{T
> > 

Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-25 Thread Sergio Muniz
It works fine here, on Ubuntu 14.04 and CUDA 6.5.

Cheers,
[S].



On Tuesday, November 24, 2015 at 9:30:32 PM UTC-2, Joaquim Masset Lacombe 
Dias Garcia wrote:
>
> Interesting, both my machines (Windows and Mac) have CUDA 7.0; possibly 
> that's the issue, since the C code in 
> https://github.com/JuliaGPU/CURAND.jl/issues/3#issuecomment-159319580 
> fails in both.
>
> If some Linux user could test this version, our statistics would be more 
> complete. I will try 6.5 and 7.5.
>
> Em terça-feira, 24 de novembro de 2015 21:21:41 UTC-2, Tim Holy escreveu:
>>
>> 6.5 
>>
>> --Tim 
>>
>> On Wednesday, November 25, 2015 01:37:22 AM Andrei wrote: 
>> > On Tue, Nov 24, 2015 at 8:03 PM, Kristoffer Carlsson <
>> kcarl...@gmail.com> 
>> > wrote: 
>> > > The original code in the OP fails for me 
>> > 
>> > Yes, this is expected behavior: for convenience, CURAND.jl creates 
>> default 
>> > random number generator, which obviously becomes invalid after 
>> > `device_reset()`. At the same time, my last code snippet creates new 
>> and 
>> > explicit RNG, which fixes the issue on Linux. 
>> > 
>> > The problem is that on some platforms even creating new generator 
>> doesn't 
>> > help, so I'm trying to understand the difference. 
>> > 
>> > @Tim, @Kristoffer, could you also specify CUDA version in use, please? 
>>
>>

Re: [julia-users] Proposal: NoveltyColors.jl

2015-11-25 Thread Timothée Poisot
I would support this as a package on its own. Having all of these
palettes in a single place would be better than having to install
multiple packages, imho.

Randy Zwitch (11/24):
> Since the Julia ecosystem is getting bigger, I figured I'd propose this 
> here first and see what people think is the right way forward (instead of 
> wasting people's time at METADATA)
> 
> In the R community, they've created two packages of novelty color schemes: 
> Wes Anderson and Beyonce. While humorous, these color palettes are 
> interesting to me and I'd like to make them available in Vega.jl (and Julia 
> more broadly). Should I:
> 
> 1) Not do it at all... because this is a serious, scientific community!
> 2) Do two separate packages, mimicking R
> 3) Create a single NoveltyColors.jl package, in case there are other 
> palettes that come up in the future
> 4) Make a feature request at Colors.jl (really not my favorite choice, 
> since there is so much cited research behind the palettes)
> 
> I neglected to mention ColorBrewer.jl (which Vega.jl uses), since 
> ColorBrewer is a known entity in the plotting community.
> 
> What do people think? Note, I'm not looking for anyone to do the work (I'll 
> do it), just looking for packaging input.


-- 
Timothée Poisot
Professeur adjoint
Quantitative and Computational Ecology
Department of Biological Sciences
Université de Montréal

WEB  http://poisotlab.io/
TWITTER  @PoisotLab



Re: [julia-users] [ANN] more error-free transformations

2015-11-25 Thread Jeffrey Sarnoff
Yep; and I appreciate the note.

On Wed, Nov 25, 2015 at 11:00 AM, Tom Breloff wrote:

> Actually this is really smart. You can represent exact values (lsp == 0)
> or open intervals one ulp wide... This is a good chunk of the value of
> fixed-size unums. I'll be keeping a close eye on the package. Thanks
> Jeffrey.
>
> On Wednesday, November 25, 2015, Jeffrey Sarnoff <
> jeffrey.sarn...@gmail.com> wrote:
>
>>
>> These are distinct operators that substitute directly for (+),(-),(*),(/)
>> in situations where one wants to obtain more of the mathematically true result
>> than is usually available:
>>
>> two = 2.0; sqrt2 = sqrt(2);
>> residualValueRoundedAway = Float64(sqrt(big(2)) - sqrt2) #
>> -9.667293313452913e-17
>>
>> mostSignificantPart, leastSignificantPart = eftSqrt(two)
>> mostSignificantPart ==  1.4142135623730951
>> leastSignificantPart == -9.667293313452912e-17 # we recover the residual
>> value, itself at Float64 precision
>>
>> so we obtain the arithmetic result at twice the 'working' precision (in
>> two parts, the mspart == the usual result).
>>
>> exp1log2 = exp(1.0)*log(2.0);
>>   #  1.88416938536372
>> residualValueRoundedAway = Float64(exp(big(1))*log(big(2)) - exp1log2) #
>>  8.146538547111741e-17
>>
>> mostSignificantPart, leastSignificantPart = eftProd2( exp(1.0), log(2.0) )
>>   # (1.88416938536372, -8.177744937186283e-17)
>>
>> --
>> These transformations have the additional benefit that the two parts are
>> well separated: they do not overlap in the working precision.
>> So, in all cases, mostSignificantPart + leastSignificantPart ==
>> mostSignificantPart.
>> They are as well separated as possible, without losing information.
>>
>> These functions are well-suited to assisting the implementation of
>> extended-precision floating-point math.
>> Another application (that, until otherwise informed, I'll say is from me)
>> is to accelerate inline rounding (see RoundFast.jl for how).
>>
>> Assuming one had a Float64 unum-ish capability, a double-double float
>> would extend the precision.
>> (Ultimately, all these parts should meld)
>>
>>
>> On Wednesday, November 25, 2015 at 9:19:08 AM UTC-5, Tom Breloff wrote:
>>>
>>> Thanks Jeffrey. Can you expand on the specifics of the package?  What
>>> would you say are the primary use cases? How does this differ from interval
>>> arithmetic or Unums?
>>>
>>> On Wednesday, November 25, 2015, Jeffrey Sarnoff wrote:
>>>
 ErrorFreeArith.jl offers
 error-free transformations not (yet?) included in the ErrorFreeTransforms
 package by dsiem.

 These operations convey the usual arithmetic result accompanied by a
 residual value that is usually lost to rounding.
 This gives the correct value at twice the working precision (correctly
 rounded for +,-,*,/; still 1/(1/x) = x or x ± ulp(x)).






Re: [julia-users] how to make my own 3d array type

2015-11-25 Thread Tim Holy
One more small tweak: Int is a concrete type, so you can use nstar::Int in the 
type definition (you don't need to add another type parameter).
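
[Editor's note] Applying that tweak, the definition simplifies to the sketch
below; the getindex method is an assumption added here to round out the
AbstractArray interface, not something from the thread:

immutable MyType2{T,N,A<:AbstractArray} <: AbstractArray{T,N}
    var :: A
    nstar :: Int                 # concrete field type, no extra parameter
end

MyType2{T,N}(var::AbstractArray{T,N}, nstar::Int) =
    MyType2{T,N,typeof(var)}(var, nstar)

Base.linearindexing(::Type{MyType2}) = Base.LinearFast()
Base.size(S::MyType2) = (S.nstar,)
Base.getindex(S::MyType2, i::Int) = S.var[i]   # assumed: delegate to var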

Glad to hear you're off to the races now.

Best,
--Tim

On Wednesday, November 25, 2015 06:29:33 PM Evan Mason wrote:
> Great, thanks.  So, I now have:
> 
> immutable MyType2{T,N,A<:AbstractArray, I<:Int} <: AbstractArray{T,N}
> var :: A
> nstar :: I
> end
> 
> MyType2{T,N}(var::AbstractArray{T,N}, nstar::Int) = MyType2{T,N,
> typeof(var), typeof(nstar)}(var, nstar)
> 
> Base.linearindexing(::Type{MyType2}) = Base.LinearFast()
> Base.size(S::MyType2) = (S.nstar,)
> 
> and I found base/linalg/eigen.jl helpful in figuring this out.
> 
> On Wed, Nov 25, 2015 at 12:53 PM, Tim Holy wrote:
> > Check out the "Types" section of the manual, esp. the part about
> > "Composite
> > Types."
> > 
> > Also, see base/subarray.jl if you want an example of a complex but full-
> > featured AbstractArray type.
> > 
> > Best,
> > --Tim
> > 
> > On Wednesday, November 25, 2015 03:24:39 AM Evan wrote:
> > > Thanks Tim,
> > > 
> > > Following the link you suggested I made the following type which does
> > 
> > what
> > 
> > > I want:
> > > 
> > > immutable MyType{T,N,A<:AbstractArray} <: AbstractArray{Float64,3}
> > > 
> > > conf_lims :: A
> > > 
> > > end
> > > MyType{T,N}(conf_lims::AbstractArray{T,N}) =
> > 
> > MyType{T,N,typeof(conf_lims)}(
> > 
> > > conf_lims)
> > > Base.size(S::MyType) = (S.count,)
> > > Base.linearindexing(::Type{MyType}) = Base.LinearFast();
> > > 
> > > 
> > > What I haven't figured out is how to add more than just the one
> > > variable;
> > > for instance I'd like to have a "count" variable which would be an
> > 
> > integer?
> > 
> > > On Monday, November 16, 2015 at 3:45:43 PM UTC+1, Tim Holy wrote:
> > > > Totally different error (it's a missing size method). You also need to
> > > > read
> > > > this:
> > > > http://docs.julialang.org/en/stable/manual/interfaces/#abstract-arrays
> > > > 
> > > > --Tim
> > > > 
> > > > On Monday, November 16, 2015 06:08:15 AM Evan wrote:
> > > > > Thank you, Tim, I understand the motivation for the approach in the
> > 
> > link
> > 
> > > > > you sent.
> > > > > 
> > > > > I'm still unable to create the type that I want. I actually want two
> > > > 
> > > > arrays
> > > > 
> > > > > in my type, one 2d and the other 3d.
> > > > > 
> > > > > First though, I'm still stuck on just the 2d when I try to implement
> > > > 
> > > > your
> > > > 
> > > > > suggestion:
> > > > > julia> type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N}
> > > > > 
> > > > >var::A
> > > > >
> > > > >end
> > > > > 
> > > > > julia> MyType{T,N}(var::AbstractArray{T,N}) =
> > > > 
> > > > MyType{T,N,typeof(var)}(var)
> > > > 
> > > > > MyType{T,N,A<:AbstractArray{T,N}}
> > > > > 
> > > > > julia> aa = MyType(zeros(2,3))
> > > > > Error showing value of type MyType{Float64,2,Array{Float64,2}}:
> > > > > ERROR: MethodError: `size` has no method matching
> > > > 
> > > > size(::MyType{Float64,2,
> > > > 
> > > > > Array{Float64,2}})
> > > > > 
> > > > > Closest candidates are:
> > > > >   size{T,n}(::AbstractArray{T,n}, ::Any)
> > > > >   size(::Any, ::Integer, ::Integer, ::Integer...)
> > > > >  
> > > > >  in showarray at show.jl:1231
> > > > >  in anonymous at replutil.jl:29
> > > > >  in with_output_limit at ./show.jl:1271
> > > > >  in writemime at replutil.jl:28
> > > > >  in display at REPL.jl:114
> > > > >  in display at REPL.jl:117
> > > > >  [inlined code] from multimedia.jl:151
> > > > >  in display at multimedia.jl:162
> > > > >  in print_response at REPL.jl:134
> > > > >  in print_response at REPL.jl:121
> > > > >  in anonymous at REPL.jl:624
> > > > >  in run_interface at ./LineEdit.jl:1610
> > > > >  in run_frontend at ./REPL.jl:863
> > > > >  in run_repl at ./REPL.jl:167
> > > > >  in _start at ./client.jl:420
> > > > > 
> > > > > julia>
> > > > > 
> > > > > Performance is important, but it's not clear what I should use in
> > 
> > place
> > 
> > > > of
> > > > 
> > > > > abstract types; I've tried replacing AbstractArray with just Array
> > 
> > but
> > 
> > > > that
> > > > 
> > > > > does not appear to work.
> > > > > 
> > > > > On Monday, November 16, 2015 at 1:03:05 PM UTC+1, Tim Holy wrote:
> > > > > > This fixes two problems:
> > > > > > 
> > > > > > type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N}
> > > > > > 
> > > > > > var::A
> > > > > > 
> > > > > > end
> > > > > > 
> > > > > > MyType{T,N}(var::AbstractArray{T,N}) =
> > > > > > MyType{T,N,typeof(var)}(var)
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > If performance matters, you should not use abstract types for
> > 
> > fields.
> > 
> > > > See
> > 
> > http://docs.julialang.org/en/stable/manual/faq/#how-do-abstract-or-ambiguo
> > 
> > > > > > us-fields-in-types-interact-with-the-compiler and the section
> > > > > > after
> > > > 
> > > > that.
> > > > 
> > > > > > --Tim
> > > > > > 
> > > > > > 

Re: [julia-users] Efficiently constructing a sparse matrix

2015-11-25 Thread Cedric St-Jean
Ah, I missed that in the docs. Thank you. I'll wrap it into SparseMatrixCOO.

On Wednesday, November 25, 2015 at 11:48:16 AM UTC-5, Tim Holy wrote:
>
> I = Int[] 
> J = Int[] 
> V = Float64[] 
>
> while have_more_entries_to_add() 
>  i, j, v = get_next_entry() 
> push!(I, i); push!(J, j); push!(V, v) 
> end 
>
> arry = sparse(I, J, V) 
>
> --Tim 
>
> On Wednesday, November 25, 2015 08:05:12 AM Cedric St-Jean wrote: 
> > How do people usually do this? Assignment is slow 
> > 
> > arr = spzeros(1000,1000) 
> > for ... 
> >arr[i,j] = ... 
> > ... 
> > 
> > and indeed, in scipy one would use a non-CSC format for construction, then 
> > convert it to CSC. I've seen some comments that one should just build the 
> > SparseMatrixCSC object directly for performance. Is that still the usual 
> > advice? 
> > 
> > Cédric 
>
>

[julia-users] Efficiently constructing a sparse matrix

2015-11-25 Thread Cedric St-Jean
How do people usually do this? Assignment is slow

arr = spzeros(1000,1000)
for ...
   arr[i,j] = ...
...

and indeed, in scipy one would use a non-CSC format for construction, then 
convert it to CSC. I've seen some comments that one should just build the 
SparseMatrixCSC object directly for performance. Is that still the usual advice?

Cédric



[julia-users] [OT?] Will Bret Victor's answer to "What can a technologist do about climate change?" get you to file your first PR? :)

2015-11-25 Thread Tomas Lycken


A friend just sent me a blog post by Bret Victor, whom I find to be one of 
the more inspiring individuals in the tech community today. The blog post 
is titled

*What can a technologist do about climate change? (a personal view)* 
(http://worrydream.com/#!/ClimateChange)

It’s long, but it’s well-informed and well-written. After four chapters 
focusing on things that aren’t so specific to the tech-community but 
actually applicable to almost anyone (funding, energy production, energy 
transport and energy consumption), he starts talking about “Tools for 
scientists and engineers”, and (after noting that most tools today aren’t 
great), he says

Here’s an opinion you might not hear much — I feel that one effective 
approach to addressing climate change is contributing to the development of 
Julia.

This makes at least me motivated to push the limits even further here :)

// T


Re: [julia-users] [OT?] Will Bret Victor's answer to "What can a technologist do about climate change?" get you to file your first PR? :)

2015-11-25 Thread Stefan Karpinski
That section is worth quoting in its entirety:

*Languages for technical computing*

I was hanging out with climate scientists while working on Al Gore’s book,
and they mostly did their thing in R. This is typical; most statistical
research is done in R. The language seems to inspire a level of vitriol
that’s impressive even by programmers’ standards. Even R’s advocates always
seem sort of apologetic.

Using R is a bit akin to smoking. The beginning is difficult, one may get
> headaches and even gag the first few times. But in the long run, it becomes
> pleasurable and even addictive. Yet, deep down, for those willing to be
> honest, there is something not fully healthy in it.


Complementary to R is Matlab, the primary language for numerical computing
in many scientific and engineering fields. It’s ubiquitous. Matlab has been
described as “the PHP of scientific computing”.

MATLAB is not good. Do not use it.


R and Matlab are both forty years old, weighed down with forty years of
cruft and bad design decisions. Scientists and engineers use them because
they are the vernacular, and there are no better alternatives.


Here’s an opinion you might not hear much — I feel that one effective
approach to addressing climate change is contributing to the development of
Julia. Julia is a modern technical language, intended to replace Matlab, R,
SciPy, and C++ on the scientific workbench. It’s immature right now, but it
has beautiful foundations, enthusiastic users, and a lot of potential.

I say this despite the fact that my own work has been in much the opposite
direction as Julia. Julia inherits the textual interaction of classic
Matlab, SciPy and other children of the teletype — source code and command
lines.

The goal of my own research has been tools where scientists see what
they’re doing in realtime, with immediate visual feedback and interactive
exploration. I deeply believe that a sea change in invention and discovery
is possible, once technologists are working in environments designed around:

(lots of inline images)


Obviously I think this approach is important, and I urge you to pursue it
if it speaks to you.

At the same time, I’m also happy to endorse Julia because, well, it’s just
about the only example of well-grounded academic research in technical
computing. It’s the craziest thing. I’ve been following the programming
language community for a decade, I’ve spoken at SPLASH and POPL and Strange
Loop, and it’s only slightly an unfair generalization to say that almost
every programming language researcher is working on

(a) languages and methods for software developers,
(b) languages for novices or end-users,
(c) implementation of compilers or runtimes, or
(d) theoretical considerations, often of type systems.


The very concept of a “programming language” originated with languages for
scientists — now such languages aren’t even part of the discussion! Yet
they remain the tools by which humanity understands the world and builds a
better one.

If we can provide our climate scientists and energy engineers with a
civilized computing environment, I believe it will make a very significant
difference. But not one that is easily visible or measured!

On Wed, Nov 25, 2015 at 10:38 AM, Tomas Lycken wrote:

> A friend just sent me a blog post by Bret Victor, whom I find to be one of
> the more inspiring individuals in the tech community today. The blog post
> is titled
>
> *What can a technologist do about climate change? (a personal view)*
> (http://worrydream.com/#!/ClimateChange)
>
> It’s long, but it’s well-informed and well-written. After four chapters
> focusing on things that aren’t so specific to the tech-community but
> actually applicable to almost anyone (funding, energy production, energy
> transport and energy consumption), he starts talking about “Tools for
> scientists and engineers”, and (after noting that most tools today aren’t
> great), he says
>
> Here’s an opinion you might not hear much — I feel that one effective
> approach to addressing climate change is contributing to the development of
> Julia.
>
> This makes at least me motivated to push the limits even further here :)
>
> // T
>


[julia-users] Re: Is build_executable.jl working?

2015-11-25 Thread Joaquim Masset Lacombe Dias Garcia
Have you tried this one: https://github.com/dhoegh/BuildExecutable.jl
I did NOT have success, but if you do you might even help me! 

this might help 
https://github.com/JuliaLang/julia/issues/11816#issuecomment-148784543
and this: 
https://groups.google.com/forum/#!searchin/julia-users/buil_executable/julia-users/TKDqlhxSAhk/L6uwtKJtpBEJ
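
[Editor's note] The compiler message in the log quoted below is the key: the
generated start_func.c calls jl_atexit_hook() with no arguments, while the
julia.h shipped with 0.4.1 declares jl_atexit_hook(int status). Patching the
generated call to pass a status code, e.g. jl_atexit_hook(0), should get past
that particular error, though there may be more behind it.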

Em quarta-feira, 25 de novembro de 2015 20:58:51 UTC-2, Jérémy Béjanin 
escreveu:
>
> Hello,
>
> I was wondering if the build_executable function was working. I tried 
> using it on Windows and got the following error (I included some lines 
> above the error for context):
>
> ...
>> require.jl
>> docs/helpdb.jl
>> docs/basedocs.jl
>> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\share\julia\base\precompile.jl
>> INFO: Linking sys.dll
>> INFO: System image successfully built at 
>> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin\libtestitnow.dll
>> WARNING: Building sys.dll on Windows against LLVM < 3.5.0 can cause 
>> incorrect backtraces! Delete generated sys.dll to av
>> oid these problems
>> INFO: To run Julia with this image loaded, run: julia -J 
>> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin\libtestitnow.dll
>>
>> running: 
>> C:\Users\Jeremy\.julia\v0.4\WinRPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\bin\gcc.exe
>>  
>> -g -Wl,--no-as-needed
>> `-D_WIN32_WINNT=0x0502` 
>> -IC:\Users\Jeremy\AppData\Local\Julia-0.4.1\include\julia 
>> -IC:\Users\Jeremy\AppData\Local\src -
>> IC:\Users\Jeremy\AppData\Local\src/support 
>> -IC:\Users\Jeremy\AppData\Local\usr/include 
>> -IC:\Users\Jeremy\.julia\v0.4\Win
>> RPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\include 
>> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c -o C:
>> \Users\Jeremy\AppData\Local\Julia-0.4.1\bin\testitnow.exe 
>> -Wl,-rpath,C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin -LC:\
>> Users\Jeremy\AppData\Local\Julia-0.4.1\bin -ljulia -ltestitnow
>> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c: In function 
>> 'main':
>> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c:19:5: error: 
>> too few arguments to function 'jl_atexit_hook'
>> jl_atexit_hook();
>> ^
>> In file included from 
>> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c:1:0:
>> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\include\julia/julia.h:1188:16: 
>> note: declared here
>> DLLEXPORT void jl_atexit_hook(int status);
>> ^
>> ERROR: failed process: 
>> Process(setenv(`'C:\Users\Jeremy\.julia\v0.4\WinRPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\bi
>> n\gcc.exe' -g -Wl,--no-as-needed -D_WIN32_WINNT=0x0502 
>> '-IC:\Users\Jeremy\AppData\Local\Julia-0.4.1\include\julia' '-IC:
>> \Users\Jeremy\AppData\Local\src' 
>> '-IC:\Users\Jeremy\AppData\Local\src/support' 
>> '-IC:\Users\Jeremy\AppData\Local\usr/incl
>> ude' 
>> '-IC:\Users\Jeremy\.julia\v0.4\WinRPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\include'
>>  
>> 'C:\Users\Jeremy\AppData\
>> Local\Temp\jul12B2.tmp\start_func.c' -o 
>> 'C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin\testitnow.exe' 
>> '-Wl,-rpath,C:\Use
>> rs\Jeremy\AppData\Local\Julia-0.4.1\bin' 
>> '-LC:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin' -ljulia 
>> -ltestitnow`,Union{AS
>> CIIString,UTF8String}["=C:=C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1","ADMSXMLBINDIR=C:\\Program
>>  
>> Files (x86)\\Qucs\
>>
>> \bin","ADS_LICENSE_FILE=6609@localhost","ALLUSERSPROFILE=C:\\ProgramData","APPDATA=C:\\Users\\Jeremy\\AppData\\Roaming",
>> "ASCOBINDIR=C:\\Program Files 
>> (x86)\\Qucs\\bin","CommonProgramFiles=C:\\Program Files\\Common 
>> Files","CommonProgramFiles
>> (x86)=C:\\Program Files (x86)\\Common 
>> Files","CommonProgramW6432=C:\\Program Files\\Common 
>> Files","COMPUTERNAME=JBEJANIN
>>
>> ","ComSpec=C:\\Windows\\system32\\cmd.exe","FLEXLM_TIMEOUT=200","FP_NO_HOST_CHECK=NO","HOME=C:\\Users\\Jeremy","HOME
>>
>> DRIVE=C:","HOMEPATH=\\Users\\Jeremy","LM_LICENSE_FILE=6065@lka-ic-01;6065@localhost","LOCALAPPDATA=C:\\Users\\Jeremy\\Ap
>> pData\\Local","LOGONSERVER=JBEJANIN","MATLAB_HOME=C:\\Program 
>> Files\\MATLAB\\R2015a","MOZ_PLUGIN_PATH=C:\\PROGRAM FI
>> LES (X86)\\FOXIT SOFTWARE\\FOXIT 
>> READER\\plugins\\","NIDAQmxSwitchDir=C:\\Program Files (x86)\\National 
>> Instruments\\NI-
>> DAQ\\Switch\\","NUMBER_OF_PROCESSORS=4","OCTAVEDIR=C:\\Program Files 
>> (x86)\\Qucs\\share\\qucs\\octave","OPENBLAS_MAIN_FR
>>
>> EE=1","OS=Windows_NT","Path=C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1\\bin;C:\\Users\\Jeremy\\AppData\\Local\\Julia
>> -0.4.1\\bin\\..\\Git\\bin;C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1\\bin\\..\\Git\\usr\\bin;C:\\Program
>>  
>> Files (x86)
>> \\SumatraPDF;C:\\Program Files (x86)\\NVIDIA 
>> Corporation\\PhysX\\Common;C:\\ProgramData\\Oracle\\Java\\javapath;C:\\Prog
>> ram Files 
>> (x86)\\Qucs\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsP
>> owerShell\\v1.0\\;D:\\Programs\\MiKTeX 
>> 2.9\\miktex\\bin\\x64\\;C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1\\bin;C:\\P
>> rogram 

[julia-users] Is build_executable.jl working?

2015-11-25 Thread Jérémy Béjanin
Hello,

I was wondering if the build_executable function was working. I tried using 
it on Windows and got the following error (I included some lines above the 
error for context):

...
> require.jl
> docs/helpdb.jl
> docs/basedocs.jl
> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\share\julia\base\precompile.jl
> INFO: Linking sys.dll
> INFO: System image successfully built at 
> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin\libtestitnow.dll
> WARNING: Building sys.dll on Windows against LLVM < 3.5.0 can cause 
> incorrect backtraces! Delete generated sys.dll to av
> oid these problems
> INFO: To run Julia with this image loaded, run: julia -J 
> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin\libtestitnow.dll
>
> running: 
> C:\Users\Jeremy\.julia\v0.4\WinRPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\bin\gcc.exe
>  
> -g -Wl,--no-as-needed
> `-D_WIN32_WINNT=0x0502` 
> -IC:\Users\Jeremy\AppData\Local\Julia-0.4.1\include\julia 
> -IC:\Users\Jeremy\AppData\Local\src -
> IC:\Users\Jeremy\AppData\Local\src/support 
> -IC:\Users\Jeremy\AppData\Local\usr/include 
> -IC:\Users\Jeremy\.julia\v0.4\Win
> RPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\include 
> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c -o C:
> \Users\Jeremy\AppData\Local\Julia-0.4.1\bin\testitnow.exe 
> -Wl,-rpath,C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin -LC:\
> Users\Jeremy\AppData\Local\Julia-0.4.1\bin -ljulia -ltestitnow
> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c: In function 
> 'main':
> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c:19:5: error: 
> too few arguments to function 'jl_atexit_hook'
> jl_atexit_hook();
> ^
> In file included from 
> C:\Users\Jeremy\AppData\Local\Temp\jul12B2.tmp\start_func.c:1:0:
> C:\Users\Jeremy\AppData\Local\Julia-0.4.1\include\julia/julia.h:1188:16: 
> note: declared here
> DLLEXPORT void jl_atexit_hook(int status);
> ^
> ERROR: failed process: 
> Process(setenv(`'C:\Users\Jeremy\.julia\v0.4\WinRPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\bi
> n\gcc.exe' -g -Wl,--no-as-needed -D_WIN32_WINNT=0x0502 
> '-IC:\Users\Jeremy\AppData\Local\Julia-0.4.1\include\julia' '-IC:
> \Users\Jeremy\AppData\Local\src' 
> '-IC:\Users\Jeremy\AppData\Local\src/support' 
> '-IC:\Users\Jeremy\AppData\Local\usr/incl
> ude' 
> '-IC:\Users\Jeremy\.julia\v0.4\WinRPM\deps\usr\x86_64-w64-mingw32\sys-root\mingw\include'
>  
> 'C:\Users\Jeremy\AppData\
> Local\Temp\jul12B2.tmp\start_func.c' -o 
> 'C:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin\testitnow.exe' 
> '-Wl,-rpath,C:\Use
> rs\Jeremy\AppData\Local\Julia-0.4.1\bin' 
> '-LC:\Users\Jeremy\AppData\Local\Julia-0.4.1\bin' -ljulia 
> -ltestitnow`,Union{AS
> CIIString,UTF8String}["=C:=C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1","ADMSXMLBINDIR=C:\\Program
>  
> Files (x86)\\Qucs\
>
> \bin","ADS_LICENSE_FILE=6609@localhost","ALLUSERSPROFILE=C:\\ProgramData","APPDATA=C:\\Users\\Jeremy\\AppData\\Roaming",
> "ASCOBINDIR=C:\\Program Files 
> (x86)\\Qucs\\bin","CommonProgramFiles=C:\\Program Files\\Common 
> Files","CommonProgramFiles
> (x86)=C:\\Program Files (x86)\\Common 
> Files","CommonProgramW6432=C:\\Program Files\\Common 
> Files","COMPUTERNAME=JBEJANIN
>
> ","ComSpec=C:\\Windows\\system32\\cmd.exe","FLEXLM_TIMEOUT=200","FP_NO_HOST_CHECK=NO","HOME=C:\\Users\\Jeremy","HOME
>
> DRIVE=C:","HOMEPATH=\\Users\\Jeremy","LM_LICENSE_FILE=6065@lka-ic-01;6065@localhost","LOCALAPPDATA=C:\\Users\\Jeremy\\Ap
> pData\\Local","LOGONSERVER=JBEJANIN","MATLAB_HOME=C:\\Program 
> Files\\MATLAB\\R2015a","MOZ_PLUGIN_PATH=C:\\PROGRAM FI
> LES (X86)\\FOXIT SOFTWARE\\FOXIT 
> READER\\plugins\\","NIDAQmxSwitchDir=C:\\Program Files (x86)\\National 
> Instruments\\NI-
> DAQ\\Switch\\","NUMBER_OF_PROCESSORS=4","OCTAVEDIR=C:\\Program Files 
> (x86)\\Qucs\\share\\qucs\\octave","OPENBLAS_MAIN_FR
>
> EE=1","OS=Windows_NT","Path=C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1\\bin;C:\\Users\\Jeremy\\AppData\\Local\\Julia
> -0.4.1\\bin\\..\\Git\\bin;C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1\\bin\\..\\Git\\usr\\bin;C:\\Program
>  
> Files (x86)
> \\SumatraPDF;C:\\Program Files (x86)\\NVIDIA 
> Corporation\\PhysX\\Common;C:\\ProgramData\\Oracle\\Java\\javapath;C:\\Prog
> ram Files 
> (x86)\\Qucs\\bin;C:\\Windows\\system32;C:\\Windows;C:\\Windows\\System32\\Wbem;C:\\Windows\\System32\\WindowsP
> owerShell\\v1.0\\;D:\\Programs\\MiKTeX 
> 2.9\\miktex\\bin\\x64\\;C:\\Users\\Jeremy\\AppData\\Local\\Julia-0.4.1\\bin;C:\\P
> rogram Files\\MATLAB\\R2015a\\runtime\\win64;C:\\Program 
> Files\\MATLAB\\R2015a\\bin;D:\\Programs\\Git\\cmd;D:\\Programs\
> \Git\\usr\\bin;D:\\Programs\\Vim\\vim74;C:\\Program Files 
> (x86)\\FastFieldSolvers\\FastHenry2\\Utilities;C:\\Program Fil
> es 
> (x86)\\FastFieldSolvers\\FastCap2\\Utilities;C:\\Users\\Jeremy\\AppData\\Local\\atom\\bin;;C:\\Users\\Jeremy\\.julia\
>
> \v0.4\\WinRPM\\deps\\usr\\x86_64-w64-mingw32\\sys-root\\mingw\\bin;C:\\Users\\Jeremy\\.julia\\v0.4\\WinRPM\\deps\\usr\\x
>
> 

Re: [julia-users] "shrink wrap" overallocations?

2015-11-25 Thread Seth
Ah, yes - it's coming 
full-circle. https://github.com/JuliaLang/julia/issues/2879 - I must have 
subconsciously remembered your use of the term "shrink wrap".

On Wednesday, November 25, 2015 at 2:20:45 PM UTC-8, Stefan Karpinski wrote:
>
> I don't think we have an API for this – I would suggest that 
> sizehint!(c,n) where n is smaller than the current size of the collection c 
> should do this, if someone wants to take a crack at it.
>
> On Wed, Nov 25, 2015 at 5:02 PM, Seth wrote:
>
>> Inspired by a discussion here: 
>> https://github.com/JuliaLang/julia/issues/14112#issuecomment-159715454, 
>> it might be nice to be able to reduce the amount of "padded" 
>> (allocated-but-not-yet-used) memory for dynamic structures in certain cases 
>> where we're not planning to expand the structures significantly (and can 
>> tolerate allocation delays if we do).
>>
>> Does a function that does something like this exist already?
>>
>
>

Re: [julia-users] "shrink wrap" overallocations?

2015-11-25 Thread Stefan Karpinski
I've labelled that as "up for grabs" and "intro" since it should be pretty
straightforward to do.

On Wed, Nov 25, 2015 at 5:33 PM, Seth wrote:

> Ah, yes - it's coming full-circle.
> https://github.com/JuliaLang/julia/issues/2879 - I must have
> subconsciously remembered your use of the term "shrink wrap".
>
> On Wednesday, November 25, 2015 at 2:20:45 PM UTC-8, Stefan Karpinski
> wrote:
>>
>> I don't think we have an API for this – I would suggest that
>> sizehint!(c,n) where n is smaller than the current size of the collection c
>> should do this, if someone wants to take a crack at it.
>>
>> On Wed, Nov 25, 2015 at 5:02 PM, Seth wrote:
>>
>>> Inspired by a discussion here:
>>> https://github.com/JuliaLang/julia/issues/14112#issuecomment-159715454,
>>> it might be nice to be able to reduce the amount of "padded"
>>> (allocated-but-not-yet-used) memory for dynamic structures in certain cases
>>> where we're not planning to expand the structures significantly (and can
>>> tolerate allocation delays if we do).
>>>
>>> Does a function that does something like this exist already?
>>>
>>
>>


[julia-users] Re: pmap - intermingled output from workers on v0.4

2015-11-25 Thread 'Greg Plowman' via julia-users

Thanks for your reply.

In my view it is natural that the order of the "output" (print statements) 
> is intermingled, as the code runs in parallel.


Yes, I agree. But I'd like to make sure we're talking about the same level 
of intermingledness (is this a new word?)
Firstly I don't really understand parallel processing, output streams, 
switching etc.
But when I first started using Julia for parallel sims (Julia v0.3) I was 
initially surprised that output from each worker was NOT intermingled, in 
the sense that each print statement from a worker was delivered to the 
master process console "atomically", i.e. there were discrete lines on the 
console, each wholly from a single worker.
Sure, the order of the lines depended on the speed of the processor, the 
amount of work to do etc.
After a while, I just assumed this was either magic, or there was some kind 
of queuing system with locking or similar.
In any case, I didn't really think about it until I started using Julia 
v0.4 where output lines are sometimes not discrete and sometimes delayed.

Here's an example of output:
 
 ...
 From worker 3:  Completed random trial 69
 From worker 3:  Starting random trial 86 with 100 games
 From worker 5:  Starting random trial 87 with 100 games
 From worker 2:  Completed random trial 70
 From worker 2:  Starting random trial 88 with 100 games
 From worker 27: Starting random trial 89 with 100 games
 From worker 21: Completed random trial  From worker 22: Starting 
random trial 90 with 100 games
 From worker 23: Starting random trial 93 with 100 games
 From worker 21: 81
 From worker 19: Starting random trial 91 with 100 games
 From worker 14: Starting random trial 96 with 100 games
 From worker 4:  Completed random trial 82
 From worker 4:  Starting random trial 98 with 100 games
 From worker 24: Completed random trial  From worker 26: Completed 
random trial 76
 From worker 25: Completed random trial 80
 From worker 24: 85
 From worker 22: Completed random trial 90
 From worker 3:  Completed random trial 86
 From worker 8:  Completed random trial  From worker 9:  Starting 
random trial 94 with 100 games
 From worker 8:  78
 From worker 3:  Starting random trial 99 with 100 games
 From worker 27: Completed random trial  From worker 29: Starting 
random trial 92 with 100 games
 From worker 28: Starting random trial 95 with 100 games
 From worker 27: 89
 From worker 2:  Completed random trial 88
 From worker 2:  Starting random trial 100 with 100 games
 From worker 23: Completed random trial 93
 From worker 29: Completed random trial 92
 From worker 28: Completed random trial 95
 From worker 14: Completed random trial  From worker 16: Completed 
random trial 72
 From worker 15: Completed random trial 75
 From worker 20: Completed random trial 79
 From worker 17: Completed random trial 83
 From worker 18: Completed random trial 84
 From worker 19: Completed random trial 91
 From worker 14: 96
 From worker 4:  Completed random trial 98
 From worker 9:  Completed random trial 94
 From worker 3:  Completed random trial 99
 From worker 10: Completed random trial  From worker 11: Completed 
random trial 65
 From worker 12: Completed random trial 66
 From worker 13: Completed random trial 71
 From worker 10: 77  From worker 11: Starting random trial 97 with 
100 games
 From worker 10:
 From worker 2:  Completed random trial 100
 From worker 5:  Completed random trial  From worker 6:  Completed 
random trial 73
 From worker 7:  Completed random trial 74
 From worker 5:  87
 From worker 11: Completed random trial 97


Again I have no idea how these thing work, but here's code from Julia v0.3 
(multi.jl) 

if isa(stream, AsyncStream)
    let wrker = w
        # redirect console output from workers to the client's stdout:
        @async begin
            while !eof(stream)
                line = readline(stream)
                print("\tFrom worker $(wrker.id):\t$line")
            end
        end
    end
end


And equivalent code from Julia v0.4:

function redirect_worker_output(ident, stream)
    @schedule while !eof(stream)
        line = readline(stream)
        if startswith(line, "\tFrom worker ")
            # STDOUT's of "additional" workers started from an initial worker
            # on a host are not available on the master directly - they are
            # routed via the initial worker's STDOUT.
            print(line)
        else
            print("\tFrom worker $(ident):\t$line")
        end
    end
end


It seems we've gone from @async to @schedule.
Would this make a difference?
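
[Editor's note] Independent of the @async/@schedule question, one mitigation on
the worker side is to assemble each log line into one string before writing, so
it reaches the redirected stream in a single write rather than several. A sketch
(illustrative only, not from the thread):

function report(trial::Int)
    # one write per line makes interleaving across workers much less likely
    print(string("Completed random trial ", trial, "\n"))
end

report(97)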



[julia-users] ANN: SecureSessions.jl

2015-11-25 Thread jock . lawrie
Hi all,

I'm pleased to release a beta version of SecureSessions.jl 
.

It's a first attempt at providing secure sessions for web apps written in 
Julia.
The README describes the functionality, the API and links to some examples.

The key message at this point is that the package hasn't been scrutinised 
by a security professional. Therefore:
1) Use at your own risk
2) I'm seeking advice on how to harden the implementation (I'm happy to do 
the work). Suggestions very welcome.

Cheers,
Jock

p.s. Not sure why the Travis builds are failing... build and tests pass on 
my machine with 100% code coverage. Any ideas?



Re: [julia-users] Memory-efficient / in-place allocation of array subsets?

2015-11-25 Thread Gord Stephen
Ok, good to know. Thanks!

Gord

On Tuesday, November 24, 2015 at 9:38:18 PM UTC-5, Tim Holy wrote:
>
> On Tuesday, November 24, 2015 04:53:11 PM Gord Stephen wrote: 
> > The problem arises when I need to repeat the process more than once - 
> when 
> > the SubArrays become out of scope they're garbage collected but A isn't 
> - a 
> > second call to demo2() then exceeds available memory. 
>
> That sounds like a bug. Indeed, I can (sort of) reproduce this: 
>
> julia> function foo() 
>     A = rand(3,5) 
>     B = sub(A, 1:2, 3:4) 
>     finalizer(A, A->@schedule(println("GCed A"))) 
>     B 
> end 
> foo (generic function with 1 method) 
>
> julia> B = foo() 
> 2x2 SubArray{Float64,2,Array{Float64,2},Tuple{UnitRange{Int64},UnitRange{Int64}},1}:
>  0.583567  0.719624 
>  0.60663   0.739911 
>
> julia> B = 0 
> 0 
>
> julia> gc() 
>
>
> But call gc() immediately again, and it gets cleaned up: 
>
> julia> gc() 
> GCed A 
>
> I filed an issue: 
> https://github.com/JuliaLang/julia/issues/14127 
>
> --Tim 
>
>
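
[Editor's note] Until that issue is resolved, the thread itself suggests the
interim workaround: per Tim's reproduction, the parent is freed on the second
collection, so after dropping the last reference one can simply collect twice.
A sketch reusing foo() from above:

B = foo()    # a SubArray view into the large parent A
B = 0        # drop the last reference to the view
gc(); gc()   # on 0.4 the parent may only be freed on the second pass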

[julia-users] Why does using behave the way it does with regards to remote workers?

2015-11-25 Thread Lyndon White
Hi all, a discussion on the IRC channel prompts me to ask.
From a design point of view, why does `using` behave the way it does with 
regards to remote workers?

Assume some module called `FooModule` which exports `bar`

`using FooModule`
locally loads the module, and brings it into scope -- so running locally 
`bar` works, as does running locally `FooModule.bar`
on all workers it loads the module, but does not bring it into scope, so 
running on worker 2 (for example) `bar` gives an unknown function error, but 
running `FooModule.bar` works.

`@everywhere using FooModule` causes it to be loaded locally and thus on 
all workers, and also to be loaded on all workers and brought into scope.
Thus you get warnings about replacing modules, as it is loaded twice, but 
everything is in scope, so running on worker 2 (again for example) `bar` 
works.

Why have this behavior?
Wouldn't it be cleaner if using only acted locally?



Discussion log from IRC follows for further context:

Regards
Frames



 

> Day changed to 26 Nov 2015
> 05:37 < Travisty> Is there an easy way to use a single file to load code 
> for
>   multiple workers /and/ act as a driver script? Currently 
> I
>   have been putting an “@everywhere begin … end” block in 
> the
>   script to make sure that all the workers have the 
> appropriate
>   functions defined.
> 05:38 < Travisty> But when I say “using SomeModule” inside the @everywhere
>   block, I get a bunch of warnings about the module being
>   replaced, which makes me think this might not be the 
> desired
>   approach
> 08:39 < Frames> @Travisty it took me quite a while to work out how to do
> using everywhere
> 08:40 < Frames> So let me explain, that the knowledge may be shared
> 08:40 < Travisty> Frames: sure, thanks
> 08:40 < Travisty> (I should mention that what I’ve done works, but it 
> gives the
>   annoying warning about the modules being replaced)
> 08:42 < Frames> If you run `using FooModule` on the main window, then
> `FooModule` is loaded both locally and in all workers, and
> furthermore locally (only) all exported functions are 
> brought
> into scope. (continued in next messagee...)
> 08:43 < Frames> So if `FooModule` exports `bar`, locally you can run 
> `bar`, but
> on all workers you must run `FooModule.bar` (which also 
> works
> locally)
> 08:47 < Frames> On the other hand if you run `@everywhere using
> FooModule`, then the stuff that happens when you run
> `using FooModule` happens (as it runs locally and on all
> workers), but also simultaneously *!* it triggers `using
> FooModule` as a local command on the remote workers, bringing
> it into scope so now `bar` can run everywhere.  (But this does not
> trigger all other remote workers to again get the remote load but
> 08:47 < Frames> do not bring into scope, as under the default cluster
> manager topology workers can't see each other)
> 08:49 < Frames> The important footnote *!* is that this can trigger a
> (mostly harmless) race condition with the modules all trying to
> precompile and cache the module at the same time. This breaks
> some modules (yet to work out which or exactly what goes on)
> 08:51 < Travisty> Frames: Yeah, I read about the behaviour of using
> 08:51 < Travisty> is there an argument for it behaving this way?
> 08:51 < Travisty> I mean, importing the module on the workers but not 
> bringing
>   the exported names into the namespace?
> 08:52 < Travisty> Is there a way to get the workers to also import the
>   namespace of the module?
> 08:52 < Frames> The best way I have for doing this is to first run `import
> FooModule` then `@everywhere using FooModule`, which causes it
> to be loaded locally, then it is not recompiled again as it is
> already compiled locally, then `@everywhere using FooModule`
> brings it into scope everywhere. Thus you get no warnings, as it
> has not been loaded twice, as using doesn't trigger locally and thus
> does not trigger the remote loads in the
> 08:52 < Frames> workers (which may itself be a bug)
> 08:53 < Travisty> ah
> 08:53 < Travisty> so import on the driver and then @everywhere using
> 08:53 < Frames> --- I am really not sure why it is that way.
> 08:53 < Frames> ^Yes. Import then @everywhere using.
> 08:54 < Frames> I was trying to make a macro to do it, but it does not
> seem to like it.
> 08:54 < Frames> Shall I copy and post this whole discussion to the mailing 
> list?
> 08:54 < Travisty> 

[julia-users] Re: Why does using behave the way it does with regards to remote workers?

2015-11-25 Thread Steven G. Johnson
Right now, the best way to bring a module into scope on all workers is

import FooModule
@everywhere using FooModule 


the first line loads it globally but does not bring FooModule's exported 
symbols into the local namespaces, whereas the second line imports the 
exported symbols.

I agree that this is a bit confusing; this is a known issue.
See https://github.com/JuliaLang/julia/issues/12381
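
[Editor's note] A quick hedged check of the pattern (assuming FooModule exports
bar, as in the original question, and that a worker with id 2 exists):

import FooModule
@everywhere using FooModule
remotecall_fetch(2, () -> bar())   # bar is now in scope on worker 2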


[julia-users] Array element assignment in for loop

2015-11-25 Thread Kristoffer Carlsson
Return k from the first function and you will see that things change.

If you make a function that actually doesn't do anything, you shouldn't be 
surprised if the compiler gives you a function that doesn't do anything. :)
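
[Editor's note] The original question's code isn't quoted in this digest, but
the point generalizes; a sketch of the contrast:

function dead(n)
    k = 0
    for i in 1:n
        k += 1     # k is never used afterwards: the whole loop can be elided
    end
    return nothing
end

function live(n)
    k = 0
    for i in 1:n
        k += 1
    end
    return k       # returning k forces the compiler to keep (or fold) the work
end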

[julia-users] Re: Precompilation and functions with keyword arguments

2015-11-25 Thread Dan
Aha! I tested the same sequence in a non-REPL run, and the 1st and 2nd 
runtimes after a `precompile` are the same.

On Wednesday, November 25, 2015 at 10:23:04 PM UTC+2, Dan wrote:
>
> I've managed to reproduce the timing peculiarities you described, with a 
> different function. In fact, I get the same first-run-slower behavior even 
> with regular functions with -no- keywords. It is almost as if `precompile` 
> is not doing the entire job.
> Have you tried just testing the basic precompile / 1st-run / 2nd-run timing 
> of a no-keyword generic function? 
>
> On Tuesday, November 24, 2015 at 9:43:41 PM UTC+2, Tim Holy wrote:
>>
>> I've been experimenting further with SnoopCompile and Immerse/Gadfly, 
>> trying to 
>> shave off more of the time-to-first-plot. If I pull out all the stops 
>> (using the 
>> "userimg.jl" strategy), I can get it down to about 2.5 seconds. However, 
>> doing 
>> the same plot a second time is about 0.02s. This indicates that despite 
>> my 
>> efforts, there's still a lot that's not being cached. 
>>
>> About 0.3s of that total (not much, but it's all I have data on) can be 
>> observed via snooping as re-compilation of functions that you might 
>> imagine 
>> should have been precompiled. The biggest offenders are all functions 
>> with 
>> keyword arguments. In miniature, I think you can see this here: 
>>
>> julia> function foo(X; thin=true) 
>>svdfact(X) 
>>end 
>> foo (generic function with 1 method) 
>>
>> # Before compiling this, let's make sure the work compiling foo isn't 
>> hidden 
>> # by other compilation needs: 
>> julia> A = rand(3,3) 
>> 3x3 Array{Float64,2}: 
>>  0.570780.33557   0.56497   
>>  0.0679035  0.944406  0.816098 
>>  0.0922775  0.404697  0.0900726 
>>
>> julia> svdfact(A) 
>> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>>  -0.507226   0.861331   0.0288001 
>>  -0.825227  -0.475789  -0.304344 
>>  -0.248438  -0.178138   0.952127 , 
>> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
>> Array{Float64,2}: 
>>  -0.248222  -0.707397  -0.661797 
>>   0.873751  -0.458478   0.162348 
>>   0.418264   0.537948  -0.731893) 
>>
>> # OK, let's precompile foo 
>> julia> @time precompile(foo, (Matrix{Float64},)) 
>>   0.000469 seconds (541 allocations: 35.650 KB) 
>>
>> julia> @time foo(A) 
>>   0.001174 seconds (18 allocations: 3.063 KB) 
>> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>>  -0.507226   0.861331   0.0288001 
>>  -0.825227  -0.475789  -0.304344 
>>  -0.248438  -0.178138   0.952127 , 
>> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
>> Array{Float64,2}: 
>>  -0.248222  -0.707397  -0.661797 
>>   0.873751  -0.458478   0.162348 
>>   0.418264   0.537948  -0.731893) 
>>
>> # Note the 2nd call is 10x faster, despite precompilation 
>> julia> @time foo(A) 
>>   0.000164 seconds (18 allocations: 3.063 KB) 
>> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>>  -0.507226   0.861331   0.0288001 
>>  -0.825227  -0.475789  -0.304344 
>>  -0.248438  -0.178138   0.952127 , 
>> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
>> Array{Float64,2}: 
>>  -0.248222  -0.707397  -0.661797 
>>   0.873751  -0.458478   0.162348 
>>   0.418264   0.537948  -0.731893) 
>>
>> # Note adding a keyword argument to the call causes a further 10x 
>> slowdown... 
>> julia> @time foo(A; thin=true) 
>>   0.014787 seconds (3.36 k allocations: 166.622 KB) 
>> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>>  -0.507226   0.861331   0.0288001 
>>  -0.825227  -0.475789  -0.304344 
>>  -0.248438  -0.178138   0.952127 , 
>> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
>> Array{Float64,2}: 
>>  -0.248222  -0.707397  -0.661797 
>>   0.873751  -0.458478   0.162348 
>>   0.418264   0.537948  -0.731893) 
>>
>> # ...but only for the first call 
>> julia> @time foo(A; thin=true) 
>>   0.000209 seconds (19 allocations: 3.141 KB) 
>> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>>  -0.507226   0.861331   0.0288001 
>>  -0.825227  -0.475789  -0.304344 
>>  -0.248438  -0.178138   0.952127 , 
>> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
>> Array{Float64,2}: 
>>  -0.248222  -0.707397  -0.661797 
>>   0.873751  -0.458478   0.162348 
>>   0.418264   0.537948  -0.731893) 
>>
>>
>> Obviously the times here don't add up to much, but for a project the size 
>> of 
>> Gadfly it might matter. 
>>
>> I should add that 
>> precompile(foo, (Vector{Any}, Matrix{Float64})) 
>> doesn't seem to do anything useful. 
>>
>> Any ideas? 
>>
>> Best, 
>> --Tim 
>>
>
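
[Editor's note] A plausible explanation for the keyword-specific part, hedged
because the thread doesn't settle it: in 0.4 a call like foo(A; thin=true)
dispatches through a hidden keyword-sorter method, so precompile(foo,
(Matrix{Float64},)) only covers the positional entry point; the keyword path is
still compiled on the first keyword call, matching the one-time cost measured
above.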

[julia-users] "shrink wrap" overallocations?

2015-11-25 Thread Seth
Inspired by a discussion 
here: https://github.com/JuliaLang/julia/issues/14112#issuecomment-159715454, 
it might be nice to be able to reduce the amount of "padded" 
(allocated-but-not-yet-used) memory for dynamic structures in certain cases 
where we're not planning to expand the structures significantly (and can 
tolerate allocation delays if we do).

Does a function that does something like this exist already?


[julia-users] Re: Precompilation and functions with keyword arguments

2015-11-25 Thread Dan
Further investigation suggests recompilation is happening. This raises 
questions:
1) why is there recompilation?
2) why are the memory allocations associated not reported?
3) is there some memory leak?

The following run suggests the recompilation:

  | | |_| | | | (_| |  |  Version 0.4.2-pre+15 (2015-11-20 12:12 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit c4bbb89* (5 days old release-0.4)
|__/   |  x86_64-linux-gnu


julia> testfunction2(n) = (x = 0.0; for i=1:n x += sin(i) ; end ; x)
testfunction2 (generic function with 1 method)


julia> @time precompile(testfunction2,(Int64,))
  0.004197 seconds (2.84 k allocations: 146.004 KB)


julia> a = Base._dump_function(testfunction2,(Int64,),false,false,true,false
)
"\ndefine double @julia_testfunction2_21229(i64) {\ntop:\n  %1 = icmp sgt 
i64 %0, 0\n  %2 = select i1 %1, i64 %0, i64 0\n  %3 = icmp eq i64 %2, 0\n 
 br i1 %3, label %L3, label %L\n\nL:   
 ; preds = %pass, %top\n  %x.0 = phi double [ %11, %pass ], [ 
0.00e+00, %top ]\n  %\"#s1.0\" = phi i64 [ %10, %pass ], [ 1, %top ]\n 
 %4 = sitofp i64 %\"#s1.0\" to double\n  %5 = call double inttoptr (i64 
139659687619792 to double (double)*)(double inreg %4)\n  %6 = fcmp uno 
double %4, 0.00e+00\n  %7 = fcmp ord double %5, 0.00e+00\n  %8 = or 
i1 %7, %6\n  br i1 %8, label %pass, label %fail\n\nfail:   
  ; preds = %L\n  %9 = load %jl_value_t** 
@jl_domain_exception, align 8\n  call void 
@jl_throw_with_superfluous_argument(%jl_value_t* %9, i32 1)\n 
 unreachable\n\npass: ; preds = 
%L\n  %10 = add i64 %\"#s1.0\", 1\n  %11 = fadd double %x.0, %5\n  %12 = 
icmp eq i64 %\"#s1.0\", %2\n  br i1 %12, label %L3, label %L\n\nL3: 
  ; preds = %pass, %top\n  %x.1 = phi 
double [ 0.00e+00, %top ], [ %11, %pass ]\n  ret double %x.1\n}\n"


julia> @time testfunction2(100_000)
  0.005485 seconds (5 allocations: 176 bytes)
1.84103630354


julia> a = Base._dump_function(testfunction2,(Int64,),false,false,true,false)
"\ndefine double @julia_testfunction2_21293(i64) {\ntop:\n  %1 = icmp sgt 
i64 %0, 0\n  %2 = select i1 %1, i64 %0, i64 0\n  %3 = icmp eq i64 %2, 0\n 
 br i1 %3, label %L3, label %L\n\nL:   
 ; preds = %pass, %top\n  %x.0 = phi double [ %11, %pass ], [ 
0.00e+00, %top ]\n  %\"#s1.0\" = phi i64 [ %10, %pass ], [ 1, %top ]\n 
 %4 = sitofp i64 %\"#s1.0\" to double\n  %5 = call double inttoptr (i64 
139659687619792 to double (double)*)(double inreg %4)\n  %6 = fcmp uno 
double %4, 0.00e+00\n  %7 = fcmp ord double %5, 0.00e+00\n  %8 = or 
i1 %7, %6\n  br i1 %8, label %pass, label %fail\n\nfail:   
  ; preds = %L\n  %9 = load %jl_value_t** 
@jl_domain_exception, align 8\n  call void 
@jl_throw_with_superfluous_argument(%jl_value_t* %9, i32 1)\n 
 unreachable\n\npass: ; preds = 
%L\n  %10 = add i64 %\"#s1.0\", 1\n  %11 = fadd double %x.0, %5\n  %12 = 
icmp eq i64 %\"#s1.0\", %2\n  br i1 %12, label %L3, label %L\n\nL3: 
  ; preds = %pass, %top\n  %x.1 = phi 
double [ 0.00e+00, %top ], [ %11, %pass ]\n  ret double %x.1\n}\n"


As can be seen, the first version is julia_testfunction2_21229 and the 
second is julia_testfunction2_21293. The signatures are the same.

Disclaimer - I'm very unfamiliar with the Julia codegen code (and much of 
the rest).
Intuition: It might be related to the following snippet from src/codegen.cpp:

extern "C" DLLEXPORT
void *jl_get_llvmf(jl_function_t *f, jl_tupletype_t *tt, bool getwrapper)
{
:
:
:
if (sf->linfo->functionObject != NULL) {
// found in the system image: force a recompile
Function *llvmf = (Function*)sf->linfo->functionObject;
if (llvmf->isDeclaration()) {
sf->linfo->specFunctionObject = NULL;
sf->linfo->functionObject = NULL;
}
}


Note the "force a recompile" comment.


On Wednesday, November 25, 2015 at 10:23:04 PM UTC+2, Dan wrote:
>
> I've managed to reproduce the timing peculiarities you described, with a 
> different function. In fact, I get the same first-run-is-slower behavior even 
> with regular functions with -no- keywords. It is almost as if `precompile` 
> is not doing the entire job.
> Have you tried just testing the basic precompile / 1st run / 2nd run timing 
> of a no-keyword generic function? 
>
> On Tuesday, November 24, 2015 at 9:43:41 PM UTC+2, Tim Holy wrote:
>>
>> I've been experimenting further with SnoopCompile and Immerse/Gadfly, 
>> trying to 
>> shave off more of the time-to-first-plot. If I pull out all the stops 
>> (using the 
>> "userimg.jl" strategy), I can get it down to about 2.5 seconds. However, 
>> doing 
>> the same plot a second time is about 0.02s. This indicates that despite 
>> my 
> efforts, there's still a lot that's not being cached. 

Re: [julia-users] "shrink wrap" overallocations?

2015-11-25 Thread Stefan Karpinski
I don't think we have an API for this – I would suggest that sizehint!(c,n)
where n is smaller than the current size of the collection c should do
this, if someone wants to take a crack at it.
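
For concreteness, a minimal sketch of what that usage could look like (the
shrinking call is the proposal, not current `sizehint!` behavior):

v = Int[]
sizehint!(v, 10_000)      # reserve capacity up front
append!(v, 1:100)
sizehint!(v, length(v))   # proposed: release the allocated-but-unused padding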

On Wed, Nov 25, 2015 at 5:02 PM, Seth  wrote:

> Inspired by a discussion here:
> https://github.com/JuliaLang/julia/issues/14112#issuecomment-159715454,
> it might be nice to be able to reduce the amount of "padded"
> (allocated-but-not-yet-used) memory for dynamic structures in certain cases
> where we're not planning to expand the structures significantly (and can
> tolerate allocation delays if we do).
>
> Does a function that does something like this exist already?
>


[julia-users] Array element assignment in for loop

2015-11-25 Thread Paul Thompson
Hi:

I've noticed that there is a performance cost associated with assigning a 
value to an array in a loop, compared to just using it for calculation. 
Here are some examples to demonstrate:

function mytest(a::Array{Float64,1},b::Array{Float64,1})
k=0.0
for i=1:length(a)
@inbounds k+=b[i]
end

nothing 
end

function mytest2(a::Array{Float64,1},b::Array{Float64,1})

for i=1:length(a)
@inbounds a[i]+=b[i]
end

nothing 
end

function mytest3(a::Array{Float64,1},b::Array{Float64,1})

for i=1:length(a)
@inbounds a[1]+=b[i]
end

nothing 
end

function mytest4(a::Array{Float64,1},b::Array{Float64,1})
   
for i=1:length(a)
@inbounds setindex!(a,b[i]+a[i],i)
end

nothing

end

c=rand(Float64,10); d=rand(Float64,10);

@time mytest(c,d);
@time mytest2(c,d);
@time mytest3(c,d);
@time mytest4(c,d);

0.03 seconds (4 allocations: 160 bytes)
0.000164 seconds (4 allocations: 160 bytes)
0.000183 seconds (4 allocations: 160 bytes)
0.000139 seconds (4 allocations: 160 bytes)


At first, I thought that just accessing array was more expensive than a 
single variable like k in the first example, but using the value of b[i] at 
each iteration doesn't seem to have much overhead. I expect that the 
difference between mytest2 and mytest3 is because of some caching issue 
under the hood since the beginning of one array is repeatedly indexed, 
while the location in the second array changes. I'm not sure why using 
setindex! is faster than bracketed indexes.

Anyone have any ideas of what is going on here? It is still super fast, but 
in some performance-critical code where I am making multiple array 
assignments per iteration, I wouldn't mind avoiding the expense above 
if I can.

Thanks!

Paul



[julia-users] Re: Array element assignment in for loop

2015-11-25 Thread Paul Thompson
Haha silly mistake! Thank you

On Wednesday, November 25, 2015 at 3:16:21 PM UTC-5, Kristoffer Carlsson 
wrote:
>
> Return k from the first function and you will see that things change.
>
> If you make a function that actually doesn't do anything, you shouldn't be 
> surprised if the compiler gives you a function that doesn't do anything. :)
>
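
For the record, here is the corrected first function (just returning the
accumulator so the loop has an observable result):

function mytest(a::Array{Float64,1}, b::Array{Float64,1})
    k = 0.0
    for i = 1:length(a)
        @inbounds k += b[i]
    end
    return k   # returning k keeps the compiler from eliminating the loop
end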


Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-25 Thread Andrei
@Joaquim Would it be possible for you to upgrade or downgrade your CUDA
version?

Meanwhile, we are still waiting for other people with Windows or Mac :D

On Wed, Nov 25, 2015 at 6:35 PM, Sergio Muniz  wrote:

> It works fine here, on Ubuntu 14.04 and CUDA 6.5.
>
> Cheers,
> [S].
>
>
>
>
> On Tuesday, November 24, 2015 at 9:30:32 PM UTC-2, Joaquim Masset Lacombe
> Dias Garcia wrote:
>>
>> Interesting, both my machines (windows and mac) have CUDA 7.0, possibly
>> thats the issue, since the C code in
>> https://github.com/JuliaGPU/CURAND.jl/issues/3#issuecomment-159319580
>> fails in both.
>>
>> If some linux user could test this version, our statistics would be more
>> complete. I will try 6.5 and 7.5
>>
>>> On Tuesday, November 24, 2015 at 21:21:41 UTC-2, Tim Holy wrote:
>>>
>>> 6.5
>>>
>>> --Tim
>>>
>>> On Wednesday, November 25, 2015 01:37:22 AM Andrei wrote:
>>> > On Tue, Nov 24, 2015 at 8:03 PM, Kristoffer Carlsson <
>>> kcarl...@gmail.com>
>>> > wrote:
>>> > > The original code in the OP fails for me
>>> >
>>> > Yes, this is expected behavior: for convenience, CURAND.jl creates
>>> default
>>> > random number generator, which obviously becomes invalid after
>>> > `device_reset()`. At the same time, my last code snippet creates new
>>> and
>>> > explicit RNG, which fixes the issue on Linux.
>>> >
>>> > The problem is that on some platforms even creating new generator
>>> doesn't
>>> > help, so I'm trying to understand the difference.
>>> >
>>> > @Tim, @Kristoffer, could you also specify CUDA version in use, please?
>>>
>>>


[julia-users] Re: Precompilation and functions with keyword arguments

2015-11-25 Thread Dan
I've managed to reproduce the timing peculiarities you described, with a 
different function. In fact, I get the same first-run-is-slower behavior even 
with regular functions with -no- keywords. It is almost as if `precompile` 
is not doing the entire job.
Have you tried just testing the basic precompile / 1st run / 2nd run timing of 
a no-keyword generic function? 

On Tuesday, November 24, 2015 at 9:43:41 PM UTC+2, Tim Holy wrote:
>
> I've been experimenting further with SnoopCompile and Immerse/Gadfly, 
> trying to 
> shave off more of the time-to-first-plot. If I pull out all the stops 
> (using the 
> "userimg.jl" strategy), I can get it down to about 2.5 seconds. However, 
> doing 
> the same plot a second time is about 0.02s. This indicates that despite my 
> efforts, there's still a lot that's not being cached. 
>
> About 0.3s of that total (not much, but it's all I have data on) can be 
> observed via snooping as re-compilation of functions that you might 
> imagine 
> should have been precompiled. The biggest offenders are all functions with 
> keyword arguments. In miniature, I think you can see this here: 
>
> julia> function foo(X; thin=true) 
>svdfact(X) 
>end 
> foo (generic function with 1 method) 
>
> # Before compiling this, let's make sure the work compiling foo isn't 
> hidden 
> # by other compilation needs: 
> julia> A = rand(3,3) 
> 3x3 Array{Float64,2}: 
>  0.570780.33557   0.56497   
>  0.0679035  0.944406  0.816098 
>  0.0922775  0.404697  0.0900726 
>
> julia> svdfact(A) 
> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>  -0.507226   0.861331   0.0288001 
>  -0.825227  -0.475789  -0.304344 
>  -0.248438  -0.178138   0.952127 , 
> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
> Array{Float64,2}: 
>  -0.248222  -0.707397  -0.661797 
>   0.873751  -0.458478   0.162348 
>   0.418264   0.537948  -0.731893) 
>
> # OK, let's precompile foo 
> julia> @time precompile(foo, (Matrix{Float64},)) 
>   0.000469 seconds (541 allocations: 35.650 KB) 
>
> julia> @time foo(A) 
>   0.001174 seconds (18 allocations: 3.063 KB) 
> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>  -0.507226   0.861331   0.0288001 
>  -0.825227  -0.475789  -0.304344 
>  -0.248438  -0.178138   0.952127 , 
> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
> Array{Float64,2}: 
>  -0.248222  -0.707397  -0.661797 
>   0.873751  -0.458478   0.162348 
>   0.418264   0.537948  -0.731893) 
>
> # Note the 2nd call is 10x faster, despite precompilation 
> julia> @time foo(A) 
>   0.000164 seconds (18 allocations: 3.063 KB) 
> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>  -0.507226   0.861331   0.0288001 
>  -0.825227  -0.475789  -0.304344 
>  -0.248438  -0.178138   0.952127 , 
> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
> Array{Float64,2}: 
>  -0.248222  -0.707397  -0.661797 
>   0.873751  -0.458478   0.162348 
>   0.418264   0.537948  -0.731893) 
>
> # Note adding a keyword argument to the call causes a further 10x 
> slowdown... 
> julia> @time foo(A; thin=true) 
>   0.014787 seconds (3.36 k allocations: 166.622 KB) 
> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>  -0.507226   0.861331   0.0288001 
>  -0.825227  -0.475789  -0.304344 
>  -0.248438  -0.178138   0.952127 , 
> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
> Array{Float64,2}: 
>  -0.248222  -0.707397  -0.661797 
>   0.873751  -0.458478   0.162348 
>   0.418264   0.537948  -0.731893) 
>
> # ...but only for the first call 
> julia> @time foo(A; thin=true) 
>   0.000209 seconds (19 allocations: 3.141 KB) 
> Base.LinAlg.SVD{Float64,Float64,Array{Float64,2}}(3x3 Array{Float64,2}: 
>  -0.507226   0.861331   0.0288001 
>  -0.825227  -0.475789  -0.304344 
>  -0.248438  -0.178138   0.952127 , 
> [1.4844598265207638,0.5068781079827415,0.19995120630810712],3x3 
> Array{Float64,2}: 
>  -0.248222  -0.707397  -0.661797 
>   0.873751  -0.458478   0.162348 
>   0.418264   0.537948  -0.731893) 
>
>
> Obviously the times here don't add up to much, but for a project the size 
> of 
> Gadfly it might matter. 
>
> I should add that 
> precompile(foo, (Vector{Any}, Matrix{Float64})) 
> doesn't seem to do anything useful. 
>
> Any ideas? 
>
> Best, 
> --Tim 
>


Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-25 Thread Joaquim Masset Lacombe Dias Garcia
Wow! It seems that something is wrong in CUDA 7.0!

I updated my Mac to CUDA 7.5 and both the C code and the Julia code worked fine!
This other guy:
http://stackoverflow.com/questions/33904554/curand-error-while-alternating-with-device-initialization-and-reset-in-cuda-7-0?noredirect=1#comment55573150_33904554

had the C code running smoothly on Windows with CUDA 7.5.

I will try the Julia code on my Windows notebook later this week with CUDA 
7.5.

If somebody else could confirm the problem with CURAND on CUDA 7.0, we could 
add that to the CURAND.jl documentation.



On Wednesday, November 25, 2015 at 19:20:48 UTC-2, Andrei Zh wrote:
>
> @Joaquim Would it be possible for you to upgrade or donwgrade your CUDA 
> version? 
>
> Meanwhile, we are still waiting for other people with Windows or Mac :D
>
> On Wed, Nov 25, 2015 at 6:35 PM, Sergio Muniz wrote:
>
>> It works fine here, on Ubuntu 14.04 and CUDA 6.5.
>>
>> Cheers,
>> [S].
>>
>>
>>
>>
>> On Tuesday, November 24, 2015 at 9:30:32 PM UTC-2, Joaquim Masset Lacombe 
>> Dias Garcia wrote:
>>>
>>> Interesting, both my machines (windows and mac) have CUDA 7.0, possibly 
>>> thats the issue, since the C code in 
>>> https://github.com/JuliaGPU/CURAND.jl/issues/3#issuecomment-159319580 
>>> fails in both.
>>>
>>> If some linux user could test this version, our statistics would be more 
>>> complete. I will try 6.5 and 7.5
>>>
>>> On Tuesday, November 24, 2015 at 21:21:41 UTC-2, Tim Holy wrote:

 6.5 

 --Tim 

 On Wednesday, November 25, 2015 01:37:22 AM Andrei wrote: 
 > On Tue, Nov 24, 2015 at 8:03 PM, Kristoffer Carlsson <
 kcarl...@gmail.com> 
 > wrote: 
 > > The original code in the OP fails for me 
 > 
 > Yes, this is expected behavior: for convenience, CURAND.jl creates 
 default 
 > random number generator, which obviously becomes invalid after 
 > `device_reset()`. At the same time, my last code snippet creates new 
 and 
 > explicit RNG, which fixes the issue on Linux. 
 > 
 > The problem is that on some platforms even creating new generator 
 doesn't 
 > help, so I'm trying to understand the difference. 
 > 
 > @Tim, @Kristoffer, could you also specify CUDA version in use, 
 please? 


>

[julia-users] escher fps

2015-11-25 Thread Yakir Gagnon


Hi!
I’m trying to have a plot of some data update continuously using Escher.jl. 
I thought one natural way would be to use the fps function from Reactive.jl. 
The following works fine for 10 frames per second:

using Winston
main(window) = lift(_ -> plot(rand(10)), fps(10))

But I get `ERROR (unhandled task failure): push! called when another signal 
is still updating.` when I use higher values (for example 20 fps). What am I 
doing wrong? How can I fix it? 

Thanks Shashi!


Re: [julia-users] [ANN] more error-free transformations

2015-11-25 Thread Jeffrey Sarnoff

These are distinct operators that substitute directly for (+),(-),(*),(/) 
in situations where one wants to obtain more of mathematically true result 
than is usually available:

two = 2.0; sqrt2 = sqrt(2);
residualValueRoundedAway = Float64(sqrt(big(2)) - sqrt2) # 
-9.667293313452913e-17
 
mostSignficantPart, leastSignificantPart = eftSqrt(two)
mostSignificantPart ==  1.4142135623730951
leastSignificantPart == -9.667293313452912e-17 # we recover the residual 
value, itself at Float64 precision 

so we obtain the arithmetic result at twice the 'working' precision (in two 
parts, the mspart == the usual result).

exp1log2 = exp(1.0)*log(2.0);   
#  1.88416938536372
residualValueRoundedAway = Float64(exp(big(1))*log(big(2)) - exp1log2) # 
 8.146538547111741e-17

mostSignficantPart, leastSignificantPart = eftProd2( exp(1.0), log(2.0) )   
# (1.88416938536372, -8.177744937186283e-17)

--
These transformations have the additional benefit that the two parts are 
well separated, they do not overlap in the working precision.
So, in all cases, mostSignificantPart + leastSignificantPart == 
mostSignificantPart.  
They are as well separated as possible, without losing information.
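
As a sketch of the underlying idea (this is the classic Knuth TwoSum; the 
package's actual code may differ), the non-overlap property is easy to check:

function two_sum(a::Float64, b::Float64)
    hi = a + b                       # the usual rounded result
    v  = hi - a
    lo = (a - (hi - v)) + (b - v)    # exact residual: a + b == hi + lo
    hi, lo
end

hi, lo = two_sum(0.1, 0.2)   # (0.30000000000000004, -2.7755575615628914e-17)
hi + lo == hi                # true: the parts do not overlap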

These functions are well-suited to assisting the implementation of extended 
precision Floating Point math.
Another application (that, until otherwise informed, I'll say is from me) 
is to accelerate inline rounding:
  (RoundFast.jl , there to see 
how).

Assuming one had a Float64 unum-ish capability, a double-double float would 
extend the precision.
(Ultimately, all these parts should meld)


On Wednesday, November 25, 2015 at 9:19:08 AM UTC-5, Tom Breloff wrote:
>
> Thanks Jeffrey. Can you expand on the specifics of the package?  What 
> would you say are the primary use cases? How does this differ from interval 
> arithmetic or Unums?
>
> On Wednesday, November 25, 2015, Jeffrey Sarnoff wrote:
>
>> ErrorFreeArith.jl  offers 
>> error-free transformations not (yet?) included in the ErrorFreeTransforms 
>> package by dsiem.
>>
>> These operations convey the usual arithmetic result accompanied by a 
>> residual value that is usually lost to rounding.
>> This gives the correct value at twice the working precision (correctly 
>> rounded for +,-,*,/; still 1/(1/x) = x or x ± ulp(x)).
>>
>>
>>
>>

[julia-users] Re: Easiest way to get indexes of elements that satisfy a certain condition

2015-11-25 Thread Aleksandr Mikheev
Oh, thank you, didn't know that.


[julia-users] Easiest way to get indexes of elements that satisfy a certain condition

2015-11-25 Thread Aleksandr Mikheev
Hi all,

So imagine I have an array. For example:

s = [1 2 5 7 3 3]


Also, I have x = 4. And I would like to have an array of indexes 
of the elements of s that satisfy:

s .< x

 
In Fortran or MATLAB I would do something like this:

indx = 1:1:size(s) (or length(s) in MATLAB)

And then:

indx = pack(indx,s>x)

 
or 

indx=indx(s < x)

 
In Julia I tried something like:

indx = indx.*(s .< x)


But I got a matrix as output for some reason. So what is the easiest 
solution in this situation?



[julia-users] Best practice for editing official packages?

2015-11-25 Thread Eric Forgy
I'm still learning the ropes.

When I "Pkg.add" a Julia package, it produces a git repo in .julia. If I 
wanted to revise one of the packages, is it unadvised to modify it directly 
in the .julia repo if I ultimately intend to submit a PR?

My first instinct was to clone the package somewhere other than .julia and 
then "Pkg.clone" from there, but that seems a bit roundabout. Hence my 
question.

I tried the following which doesn't seem optimal:

   1. Fork the package on GitHub
   2. "git clone" the package somewhere other than .julia
   3. Pkg.clone from my local cloned repo (had problems with this because 
   the package was already in .julia so deleted it first)
   4. Modify the package, commit and then Pkg.checkout
   5. Repeat step 3. until revision does what I want it to do
  - Steps 3. and 4. got me a LONG list of commits, so I introduced 
  myself to "git rebase" with disastrous results.
  6. Tried deleting the package from .julia so I could start over again 
   with "Pkg.add", but Pkg seemed to have gotten confused from my shenanigans 
   and says the package could not be found.
   7. Delete the Julia installation. Delete .julia and start again.

Disaster :)

My next attempt will be along these lines:

   1. Fork the package on GitHub
   2. Pkg.clone from my forked repo
   3. Modify the package directly from the .julia folder (hoping this will 
   avoid a ton of commits)
   4. When everything works, commit and push to my forked repo on GitHub
   5. Submit a PR from GitHub

How does that sound? Any better suggestions?

Just before submitting this question, a possibly better solution dawned on 
me. Since I have already "Pkg.add"ed the package, a git repo is already in 
.julia so I might try:

   1. Fork the package on GitHub
   2. Modify the package already inside .julia
   3. Add my fork as a remote repo
   4. When everything works, commit and push to my forked remote repo on 
   GitHub
   5. Submit a PR from GitHub

How does that sound? What do others do?

Note: The package I'm trying to modify is pure Julia so does not require 
any compiler.
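
For what it's worth, the last variant driven from the Julia REPL might look 
like this sketch (the package name, user name, and branch are placeholders):

Pkg.add("SomePkg")   # hypothetical package
cd(Pkg.dir("SomePkg")) do
    run(`git remote add myfork https://github.com/myuser/SomePkg.jl.git`)  # your fork
    # ... edit files in place, test with Pkg.test("SomePkg"), then:
    run(`git checkout -b my-fix`)
    run(`git commit -am "Describe the change"`)
    run(`git push myfork my-fix`)   # and open the PR on GitHub
end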


[julia-users] Re: Easiest way to get indexes of elements that satisfy a certain condition

2015-11-25 Thread DNF
In Matlab, the preferred way of doing this is:

find(s < x)

In Julia, it's almost exactly the same:

find(s .< x)

If you don't need the indices, only the elements, you would do:

s[s .< x]

which, again, is almost identical to Matlab.

The reason you are getting a matrix output is that 

[1 2 5 7 3 3]

is actually a matrix (with one row) in Julia, which you then multiply with 
a (column) vector. If you want a vector, write

[1, 2, 5, 7, 3, 3]


[julia-users] Re: Easiest way to get indexes of elements that satisfy a certain condition

2015-11-25 Thread Eric Forgy
I'm not sure I understand, but how's this?

julia> s = [1,2,5,7,3,3]
6-element Array{Int64,1}:
 1
 2
 5
 7
 3
 3

julia> x = 4
4

julia> find(s .< x)
4-element Array{Int64,1}:
 1
 2
 5
 6

> Hi all,
>
> So imagine I have an array. For example:
>
> s = [1 2 5 7 3 3]
>
>
> Also, I have x = 4. And I would like to have an array of indexes 
> of the elements of s that satisfy:
>
> s .< x
>
>  
> In Fortran or MATLAB I would do something like this:
>
> indx = 1:1:size(s) (or length(s) in MATLAB)
>
> And then:
>
> indx = pack(indx,s>x)
>
>  
> or 
>
> indx=indx(s < x)
>
>  
> In Julia I tried something like:
>
> indx = indx.*(s .< x)
>
>
> But I got a matrix as output for some reason. So what is the easiest 
> solution in this situation?
>
>

Re: [julia-users] Proposal: NoveltyColors.jl

2015-11-25 Thread Andreas Lobinger
Hello colleagues,

On Wednesday, November 25, 2015 at 12:34:46 AM UTC+1, Randy Zwitch wrote:
>
> I can't believe that a few hundred lines of code, with 5-8 colors apiece, 
> is going to do anything to load times. I was just concerned about adding 
> frivolity to Colors.jl, since it has so much cited research that goes along 
> with it. That, and given that ColorBrewer.jl exists separately from Colors.jl, 
> made it seem like Colors.jl might already be in a steady state.
>

When I worked on this:
https://github.com/JuliaLang/Color.jl/pull/96
I also wondered whether it would make sense to give Color[s] an interface to 
load additional color palettes and/or functions that create palettes. I'm a 
little bit reluctant to have packages that contain only data.

 


Re: [julia-users] Re: What is the best way to get two by two tables in Julia?

2015-11-25 Thread Milan Bouchet-Valat
On Tuesday, November 24, 2015 at 19:32 -0800, Arin Basu wrote:
> Thanks a million Milan and Dan. I have learned hugely from the codes 
> you shared and the packages you discussed. There is a need for 
> dedicated biostatistics packages in Julia. For instance, I could not 
> find a dedicated package on regression diagnostics
I don't think this should live in a dedicated biostatistics package.
I'm a social scientist and I would find these tools useful too. Better
share our work.

> (I tried RegTools but it did not compile for some reason on my 
> machine, Mac OS X El Capitan, Julia 0.4.1).
That code is fairly new, it doesn't look like it's set up to be used as
a package yet. You could file an issue in GitHub against this package,
as it seems to be actively maintained, to tell the author people are
interested in testing it. Adding a src/RegTools.jl file containing
these lines:

include("diagnostics.jl")
include("misc.jl")
include("modsel.jl")

and including the functions in a module should be enough.
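
Concretely, the wrapper might look like this (a sketch; only the include()d 
filenames come from above, the rest is boilerplate):

# src/RegTools.jl
module RegTools

include("diagnostics.jl")
include("misc.jl")
include("modsel.jl")

end # module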


Regards

> 
> Best,
> Arin
> 
> On Monday, 23 November 2015 04:53:46 UTC+13, Milan Bouchet-Valat
> wrote:
> > As I noted just a few days ago, I have written a small package to 
> > compute frequency tables from arbitrary arrays, with an optimized 
> > method for pooled data arrays: 
> > https://github.com/nalimilan/FreqTables.jl 
> > 
> > I've just pushed a fix so it should now work on 0.4 (but not with 
> > 0.3). 
> > 
> > We could easily add a method taking a DataFrame and symbol names
> > for 
> > columns to save some typing. 
> > 
> > 
> > Regards 
> > 
> > On Sunday, November 22, 2015 at 03:26 -0800, Dan wrote: 
> > > Hi Arin, 
> > > It would be helpful to have more details about the input (a 
> > > dataframe?) and output (a two-by-two table or a table indexed by 
> > > categories?). Some code to give context to the question would be
> > even 
> > > more help (possibly in another language, such as R). 
> > > 
> > > Having said this, here is a starting point for some code: 
> > > 
> > > If these packages are missing Pkg.add works: 
> > > 
> > > using NamedArrays 
> > > using DataFrames 
> > > using RDatasets 
> > > 
> > > Gets the dataset and makes some categorical variables in DataFrames 
> > > style: 
> > > 
> > > iris = dataset("datasets","iris") 
> > > iris[:PetalWidth] = PooledDataArray(iris[:PetalWidth]) 
> > > iris[:Species] = PooledDataArray(iris[:Species]) 
> > > 
> > > Define function for a `twobytwo` and a general categorical table 
> > > `crosstable`: 
> > > 
> > > function twobytwo(data::DataFrame,cond1,cond2) 
> > >nres = NamedArray(zeros(Int,2,2), Any[[false,true],[false,true]], ["cond1","cond2"]) 
> > >for i=1:nrow(data) 
> > >nres[Int(cond1(data[i,:]))+1, Int(cond2(data[i,:]))+1] += 1 
> > >end 
> > >nres 
> > > end 
> > > 
> > > function crosstable(data::DataFrame,col1,col2) 
> > >@assert isa(data[col1],PooledDataArray) 
> > >@assert isa(data[col2],PooledDataArray) 
> > >nres = NamedArray(zeros(Int, length(data[col1].pool), length(data[col2].pool)), 
> > >   Any[data[col1].pool, data[col2].pool], [col1,col2]) 
> > >for i=1:nrow(data) 
> > >nres[data[col1].refs[i],data[col2].refs[i]] += 1 
> > >end 
> > >nres 
> > > end 
> > > 
> > > Finally, using the functions, make some tables: 
> > > 
> > > tbt = twobytwo(iris, r->r[1,:Species]=="setosa", r->r[1,:PetalWidth]>=1.5) 
> > > ct = crosstable(iris,:PetalWidth,:Species) 
> > > 
> > > My summary and conclusions: 
> > > 1) Julia is general purpose and with a little familiarity any data 
> > > handling is possible. 
> > > 2) This is a basic data exploration operation and there must be some 
> > > easy way to do this. 
> > > 
> > > Waiting for more opinions/solutions on this question, as it is also 
> > > basic for my needs. 
> > > 
> > > Thanks for the question. 
> > > 
> > > > On Sunday, November 22, 2015 at 3:34:56 AM UTC+2, Arin Basu wrote: 
> > > > Hi All, 
> > > > 
> > > > Can you kindly advise how to get a simple way to do two by two 
> > > > tables in Julia with two categorical variables. I have tried 
> > > > split-apply-combine (by function) and it works with single variables, 
> > > > but with two or more variables, I cannot get the table I want. 
> > > > 
> > > > This is really an issue if we need to do statistical data analysis 
> > > > in Epidemiology. 
> > > > 
> > > > Any help or advice will be greatly appreciated. 
> > > > 
> > > > Arin Basu 
> > > > 


[julia-users] Re: Proposal: NoveltyColors.jl

2015-11-25 Thread cormullion
Nice idea. I confess I'm slightly not too keen on the word "Novelty" - 
reminds me of cheap Christmas presents... :)  You could consider making the 
package a bit more general...  For my purposes I've been using a small bit 
of code that extracts a selection of colors from images to make a palette. 
Basically, it's this:

using Images, Colors, Clustering

function dominant_colors(img, n=10, i=10, tolerance=0.01; resize = 1)
w, h = size(img)
neww = round(Int, w/resize)
    newh = round(Int, h/resize)
smaller_image = Images.imresize(img, (neww, newh))
imdata = convert(Array{Float64}, 
raw(separate(smaller_image).data))/256
w, h, nchannels = size(imdata)
d = transpose(reshape(imdata, w*h, nchannels))
R = kmeans(d, n, maxiter=i, tol=tolerance)
cols = RGB{Float64}[]
for i in 1:nchannels:length(R.centers)
push!(cols, RGB(R.centers[i], R.centers[i+1], R.centers[i+2]))
end
return cols, R.cweights/sum(R.cweights)
end

sorted_palette, wts = 
dominant_colors(imread("/tmp/van-gogh-starry-sky.png"), 10, 40, resize=3)

which gives a selection of colors from the image (with weights if needed). 
An interesting feature of this is that the results always vary slightly 
each time - sometimes I stack them to see the differences:


Re: [julia-users] how to make my own 3d array type

2015-11-25 Thread Tim Holy
Check out the "Types" section of the manual, esp. the part about "Composite 
Types."

Also, see base/subarray.jl if you want an example of a complex but full-
featured AbstractArray type.

Best,
--Tim

On Wednesday, November 25, 2015 03:24:39 AM Evan wrote:
> Thanks Tim,
> 
> Following the link you suggested I made the following type which does what
> I want:
> 
> immutable MyType{T,N,A<:AbstractArray} <: AbstractArray{Float64,3}
> conf_lims :: A
> end
> MyType{T,N}(conf_lims::AbstractArray{T,N}) = MyType{T,N,typeof(conf_lims)}(
> conf_lims)
> Base.size(S::MyType) = (S.count,)
> Base.linearindexing(::Type{MyType}) = Base.LinearFast();
> 
> 
> What I haven't figured out is how to add more than just the one variable;
> for instance I'd like to have a "count" variable which would be an integer?
> 
> On Monday, November 16, 2015 at 3:45:43 PM UTC+1, Tim Holy wrote:
> > Totally different error (it's a missing size method). You also need to
> > read
> > this:
> > http://docs.julialang.org/en/stable/manual/interfaces/#abstract-arrays
> > 
> > --Tim
> > 
> > On Monday, November 16, 2015 06:08:15 AM Evan wrote:
> > > Thank you, Tim, I understand the motivation for the approach in the link
> > > you sent.
> > > 
> > > I'm still unable to create the type that I want. I actually want two
> > 
> > arrays
> > 
> > > in my type, one 2d and the other 3d.
> > > 
> > > First though, I'm still stuck on just the 2d when I try to implement
> > 
> > your
> > 
> > > suggestion:
> > > julia> type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N}
> > > 
> > >var::A
> > >
> > >end
> > > 
> > > julia> MyType{T,N}(var::AbstractArray{T,N}) =
> > 
> > MyType{T,N,typeof(var)}(var)
> > 
> > > MyType{T,N,A<:AbstractArray{T,N}}
> > > 
> > > julia> aa = MyType(zeros(2,3))
> > > Error showing value of type MyType{Float64,2,Array{Float64,2}}:
> > > ERROR: MethodError: `size` has no method matching
> > 
> > size(::MyType{Float64,2,
> > 
> > > Array{Float64,2}})
> > > 
> > > Closest candidates are:
> > >   size{T,n}(::AbstractArray{T,n}, ::Any)
> > >   size(::Any, ::Integer, ::Integer, ::Integer...)
> > >  
> > >  in showarray at show.jl:1231
> > >  in anonymous at replutil.jl:29
> > >  in with_output_limit at ./show.jl:1271
> > >  in writemime at replutil.jl:28
> > >  in display at REPL.jl:114
> > >  in display at REPL.jl:117
> > >  [inlined code] from multimedia.jl:151
> > >  in display at multimedia.jl:162
> > >  in print_response at REPL.jl:134
> > >  in print_response at REPL.jl:121
> > >  in anonymous at REPL.jl:624
> > >  in run_interface at ./LineEdit.jl:1610
> > >  in run_frontend at ./REPL.jl:863
> > >  in run_repl at ./REPL.jl:167
> > >  in _start at ./client.jl:420
> > > 
> > > julia>
> > > 
> > > Performance is important, but it's not clear what I should use in place
> > 
> > of
> > 
> > > abstract types; I've tried replacing AbstractArray with just Array but
> > 
> > that
> > 
> > > does not appear to work.
> > > 
> > > On Monday, November 16, 2015 at 1:03:05 PM UTC+1, Tim Holy wrote:
> > > > This fixes two problems:
> > > > 
> > > > type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N}
> > > > 
> > > > var::A
> > > > 
> > > > end
> > > > 
> > > > MyType{T,N}(var::AbstractArray{T,N}) = MyType{T,N,typeof(var)}(var)
> > > > 
> > > > 
> > > > 
> > > > If performance matters, you should not use abstract types for fields.
> > 
> > See
> > 
> > 
> > http://docs.julialang.org/en/stable/manual/faq/#how-do-abstract-or-ambiguo
> > 
> > > > us-fields-in-types-interact-with-the-compiler and the section after
> > 
> > that.
> > 
> > > > --Tim
> > > > 
> > > > On Monday, November 16, 2015 03:54:15 AM Evan wrote:
> > > > > For a 2d array the following works:
> > > > > 
> > > > > type mytype{T}
> > > > > 
> > > > > var :: AbstractMatrix{T}
> > > > > 
> > > > > end
> > > > > 
> > > > > julia> t = mytype(zeros(4, 3))
> > > > > 
> > > > > mytype{Float64}(4x3 Array{Float64,2}:
> > > > >  0.0  0.0  0.0
> > > > >  0.0  0.0  0.0
> > > > >  0.0  0.0  0.0
> > > > >  0.0  0.0  0.0)
> > > > > 
> > > > > But how do I extend this to a 3d array?
> > > > > 
> > > > > julia> t = mytype(zeros(2, 4, 3))
> > > > > ERROR: MethodError: `convert` has no method matching
> > > > 
> > > > convert(::Type{mytype{T
> > > > 
> > > > > }}, ::Array{Float64,3})
> > > > > This may have arisen from a call to the constructor mytype{T}(...),
> > > > > since type constructors fall back to convert methods.
> > > > > 
> > > > > Closest candidates are:
> > > > >   call{T}(::Type{T}, ::Any)
> > > > >   convert{T}(::Type{T}, ::T)
> > > > >   mytype{T}(::AbstractArray{T,2})
> > > > >  
> > > > >  in call at essentials.jl:56



Re: [julia-users] how to make my own 3d array type

2015-11-25 Thread Evan
Thanks Tim,

Following the link you suggested I made the following type which does what 
I want:

immutable MyType{T,N,A<:AbstractArray} <: AbstractArray{Float64,3}
conf_lims :: A
end
MyType{T,N}(conf_lims::AbstractArray{T,N}) = MyType{T,N,typeof(conf_lims)}(
conf_lims)
Base.size(S::MyType) = (S.count,)
Base.linearindexing(::Type{MyType}) = Base.LinearFast();


What I haven't figured out is how to add more than just the one variable; 
for instance I'd like to have a "count" variable which would be an integer?
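
Would something like the following sketch be the right direction? The integer 
just becomes a second field threaded through the outer constructor (the 
getindex definition is my guess for completeness):

immutable MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N}
    conf_lims :: A
    count :: Int
end
MyType{T,N}(conf_lims::AbstractArray{T,N}, count::Integer) =
    MyType{T,N,typeof(conf_lims)}(conf_lims, Int(count))
Base.size(S::MyType) = size(S.conf_lims)
Base.linearindexing{M<:MyType}(::Type{M}) = Base.LinearFast()
Base.getindex(S::MyType, i::Int) = S.conf_lims[i]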




On Monday, November 16, 2015 at 3:45:43 PM UTC+1, Tim Holy wrote:
>
> Totally different error (it's a missing size method). You also need to 
> read 
> this: 
> http://docs.julialang.org/en/stable/manual/interfaces/#abstract-arrays 
>
> --Tim 
>
> On Monday, November 16, 2015 06:08:15 AM Evan wrote: 
> > Thank you, Tim, I understand the motivation for the approach in the link 
> > you sent. 
> > 
> > I'm still unable to create the type that I want. I actually want two 
> arrays 
> > in my type, one 2d and the other 3d. 
> > 
> > First though, I'm still stuck on just the 2d when I try to implement 
> your 
> > suggestion: 
> > julia> type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N} 
> >var::A 
> >end 
> > 
> > julia> MyType{T,N}(var::AbstractArray{T,N}) = 
> MyType{T,N,typeof(var)}(var) 
> > MyType{T,N,A<:AbstractArray{T,N}} 
> > 
> > julia> aa = MyType(zeros(2,3)) 
> > Error showing value of type MyType{Float64,2,Array{Float64,2}}: 
> > ERROR: MethodError: `size` has no method matching 
> size(::MyType{Float64,2, 
> > Array{Float64,2}}) 
> > Closest candidates are: 
> >   size{T,n}(::AbstractArray{T,n}, ::Any) 
> >   size(::Any, ::Integer, ::Integer, ::Integer...) 
> >  in showarray at show.jl:1231 
> >  in anonymous at replutil.jl:29 
> >  in with_output_limit at ./show.jl:1271 
> >  in writemime at replutil.jl:28 
> >  in display at REPL.jl:114 
> >  in display at REPL.jl:117 
> >  [inlined code] from multimedia.jl:151 
> >  in display at multimedia.jl:162 
> >  in print_response at REPL.jl:134 
> >  in print_response at REPL.jl:121 
> >  in anonymous at REPL.jl:624 
> >  in run_interface at ./LineEdit.jl:1610 
> >  in run_frontend at ./REPL.jl:863 
> >  in run_repl at ./REPL.jl:167 
> >  in _start at ./client.jl:420 
> > 
> > julia> 
> > 
> > Performance is important, but it's not clear what I should use in place 
> of 
> > abstract types; I've tried replacing AbstractArray with just Array but 
> that 
> > does not appear to work. 
> > 
> > On Monday, November 16, 2015 at 1:03:05 PM UTC+1, Tim Holy wrote: 
> > > This fixes two problems: 
> > > 
> > > type MyType{T,N,A<:AbstractArray} <: AbstractArray{T,N} 
> > > 
> > > var::A 
> > > 
> > > end 
> > > 
> > > MyType{T,N}(var::AbstractArray{T,N}) = MyType{T,N,typeof(var)}(var) 
> > > 
> > > 
> > > 
> > > If performance matters, you should not use abstract types for fields. 
> See 
> > > 
> > > 
> http://docs.julialang.org/en/stable/manual/faq/#how-do-abstract-or-ambiguo 
> > > us-fields-in-types-interact-with-the-compiler and the section after 
> that. 
> > > 
> > > --Tim 
> > > 
> > > On Monday, November 16, 2015 03:54:15 AM Evan wrote: 
> > > > For a 2d array the following works: 
> > > > 
> > > > type mytype{T} 
> > > > 
> > > > var :: AbstractMatrix{T} 
> > > > 
> > > > end 
> > > > 
> > > > julia> t = mytype(zeros(4, 3)) 
> > > > 
> > > > mytype{Float64}(4x3 Array{Float64,2}: 
> > > >  0.0  0.0  0.0 
> > > >  0.0  0.0  0.0 
> > > >  0.0  0.0  0.0 
> > > >  0.0  0.0  0.0) 
> > > > 
> > > > But how do I extend this to a 3d array? 
> > > > 
> > > > julia> t = mytype(zeros(2, 4, 3)) 
> > > > ERROR: MethodError: `convert` has no method matching 
> > > 
> > > convert(::Type{mytype{T 
> > > 
> > > > }}, ::Array{Float64,3}) 
> > > > This may have arisen from a call to the constructor mytype{T}(...), 
> > > > since type constructors fall back to convert methods. 
> > > > 
> > > > Closest candidates are: 
> > > >   call{T}(::Type{T}, ::Any) 
> > > >   convert{T}(::Type{T}, ::T) 
> > > >   mytype{T}(::AbstractArray{T,2}) 
> > > >   
> > > >  in call at essentials.jl:56 
>
>

[julia-users] [ANN] more error-free transformations

2015-11-25 Thread Jeffrey Sarnoff
ErrorFreeArith.jl  offers 
error-free transformations not (yet?) included in the ErrorFreeTransforms 
package by dsiem.

These operations convey the usual arithmetic result accompanied by a 
residual value that is usually lost to rounding.
This gives the correct value at twice the working precision (correctly 
rounded for +,-,*,/; still 1/(1/x) = x or x ± ulp(x)).





[julia-users] Re: Proposal: NoveltyColors.jl

2015-11-25 Thread Randy Zwitch
Novelty, as in non-rigorous. These color schemes may or may not be 
aesthetically pleasing, and come without any cited research as to their 
"correctness" for journal printing, colorblind compatibility, or any of the 
other properties that Colors.jl currently provides.

On Wednesday, November 25, 2015 at 3:02:49 AM UTC-5, cormu...@mac.com wrote:
>
> Nice idea. I confess I'm slightly not too keen on the word "Novelty" - 
> reminds me of cheap Christmas presents... :)  You could consider making the 
> package a bit more general...  For my purposes I've been using a small bit 
> of code that extracts a selection of colors from images to make a palette. 
> Basically, it's this:
>
> using Images, Colors, Clustering
>
> function dominant_colors(img, n=10, i=10, tolerance=0.01; resize = 1)
> w, h = size(img)
> neww = round(Int, w/resize)
> newh = round(Int, h/resize)
> smaller_image = Images.imresize(img, (neww, newh))
> imdata = convert(Array{Float64}, 
> raw(separate(smaller_image).data))/256
> w, h, nchannels = size(imdata)
> d = transpose(reshape(imdata, w*h, nchannels))
> R = kmeans(d, n, maxiter=i, tol=tolerance)
> cols = RGB{Float64}[]
> for i in 1:nchannels:length(R.centers)
> push!(cols, RGB(R.centers[i], R.centers[i+1], R.centers[i+2]))
> end
> return cols, R.cweights/sum(R.cweights)
> end
>
> sorted_palette, wts = 
> dominant_colors(imread("/tmp/van-gogh-starry-sky.png"), 10, 40, resize=3)
>
> which gives a selection of colors from the image (with weights if needed). 
> An interesting feature of this is that the results always vary slightly 
> each time - sometimes I stack them to see the differences:
>

Re: [julia-users] [ANN] more error-free transformations

2015-11-25 Thread Tom Breloff
Thanks Jeffrey. Can you expand on the specifics of the package?  What would
you say are the primary use cases? How does this differ from interval
arithmetic or Unums?

On Wednesday, November 25, 2015, Jeffrey Sarnoff 
wrote:

> ErrorFreeArith.jl  offers
> error-free transformations not (yet?) included in the ErrorFreeTransforms
> package by dsiem.
>
> These operations convey the usual arithmetic result accompanied by a
> residual value that is usually lost to rounding.
> This gives the correct value at twice the working precision (correctly
> rounded for +,-,*,/; still 1/(1/x) = x or x ± ulp(x)).
>
>
>
>