[julia-users] Re: indirect array indexing

2016-09-30 Thread apaoluzzi
OK! Many thanks for your fast reply.
--a

On Friday, September 30, 2016 at 3:53:43 PM UTC+2, apao...@gmail.com wrote:
>
> Hi guys,
>
> I'm learning the language while implementing an advanced package for 
> geometric and solid modeling.
> My question is: What is the right idiom (shorter and/or faster) for 
> writing this kind of array indexing:
>
> linesFromLineArray(V,EV) = Any[[V[:,EV[:,k][1]] V[:,EV[:,k][2]]  ] for 
> k=1:size(EV,2)]
>
> Efficiency is the strongest concern, since this method should provide 
> the indirect indexing for any kind of cellular complex.
> Many thanks for your help
>
>
> julia> V,EV
> (
> 2x10 Array{Float64,2}:
>  0.13611  0.14143  0.38501  0.42103  0.96927  0.90207  0.0 0.11508 
>  0.61437  0.52335
>  0.59824  0.58921  0.25964  0.24118  0.19741  0.34109  0.0213  0.0 
>  0.05601  0.17309,
>
> 2x5 Array{Int64,2}:
>  1  3  5  7   9
>  2  4  6  8  10)
>
> julia> linesFromLineArray(V,EV)
> 5-element Array{Any,1}:
>  2x2 Array{Float64,2}:
>  0.13611  0.14143
>  0.59824  0.58921
>  2x2 Array{Float64,2}:
>  0.38501  0.42103
>  0.25964  0.24118
>  2x2 Array{Float64,2}:
>  0.96927  0.90207
>  0.19741  0.34109
>  2x2 Array{Float64,2}:
>  0.0 0.11508
>  0.0213  0.0  
>  2x2 Array{Float64,2}:
>  0.61437  0.52335
>  0.05601  0.17309
>
>

[julia-users] Re: indirect array indexing

2016-09-30 Thread vavasis
First, it is better to use a 3-dimensional array rather than an array of 
arrays.  If you must use an array of arrays, it is better for the outer 
array to be of type Array{Array{Float64,2},1} rather than Array{Any,1}.

For max performance, it is hard to beat explicit loops:

result = Array(Float64, size(V,1), 2, size(EV,2))
for k = 1:size(EV,2)
    for j = 1:2
        for i = 1:size(V,1)
            result[i,j,k] = V[i,EV[j,k]]
        end
    end
end

There are some macro packages like Einsum.jl that let you hide the explicit 
loops.
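If the array-of-arrays shape is needed downstream, a middle ground (a sketch, not the only possible idiom) is to keep the comprehension but give it a concrete element type, and index both endpoint columns in one slice:

```julia
# Typed comprehension: V[:, EV[:,k]] selects both endpoint columns at once,
# and the Array{Float64,2}[...] prefix makes the outer eltype concrete
# instead of Any.
linesFromLineArray(V, EV) = Array{Float64,2}[V[:, EV[:,k]] for k = 1:size(EV,2)]
```

This avoids the Any-typed outer array while staying a one-liner; the explicit 3-d loop above will still be faster when the result is consumed as one contiguous block.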

-- Steve Vavasis




On Friday, September 30, 2016 at 9:53:43 AM UTC-4, apao...@gmail.com wrote:
>
> Hi guys,
>
> I'm learning the language while implementing an advanced package for 
> geometric and solid modeling.
> My question is: What is the right idiom (shorter and/or faster) for 
> writing this kind of array indexing:
>
> linesFromLineArray(V,EV) = Any[[V[:,EV[:,k][1]] V[:,EV[:,k][2]]  ] for 
> k=1:size(EV,2)]
>
> Efficiency is the strongest concern, since this method should provide 
> the indirect indexing for any kind of cellular complex.
> Many thanks for your help
>
>
> julia> V,EV
> (
> 2x10 Array{Float64,2}:
>  0.13611  0.14143  0.38501  0.42103  0.96927  0.90207  0.0 0.11508 
>  0.61437  0.52335
>  0.59824  0.58921  0.25964  0.24118  0.19741  0.34109  0.0213  0.0 
>  0.05601  0.17309,
>
> 2x5 Array{Int64,2}:
>  1  3  5  7   9
>  2  4  6  8  10)
>
> julia> linesFromLineArray(V,EV)
> 5-element Array{Any,1}:
>  2x2 Array{Float64,2}:
>  0.13611  0.14143
>  0.59824  0.58921
>  2x2 Array{Float64,2}:
>  0.38501  0.42103
>  0.25964  0.24118
>  2x2 Array{Float64,2}:
>  0.96927  0.90207
>  0.19741  0.34109
>  2x2 Array{Float64,2}:
>  0.0 0.11508
>  0.0213  0.0  
>  2x2 Array{Float64,2}:
>  0.61437  0.52335
>  0.05601  0.17309
>
>

Re: [julia-users] Generic Linux binaries for julia-0.4.6

2016-09-30 Thread Mauro
Adapting the links found on
http://julialang.org/downloads/oldreleases.html
should work.  E.g.:

https://s3.amazonaws.com/julialang/bin/winnt/x64/0.4/julia-0.4.7-win64.exe

to

https://s3.amazonaws.com/julialang/bin/winnt/x64/0.4/julia-0.4.6-win64.exe
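Since the question was about Linux, the same substitution presumably applies to the generic Linux tarball; the URL below is assumed by analogy with the Windows one and should be verified before relying on it:

```shell
# Assumed-by-analogy URL pattern for the generic 64-bit Linux binaries.
VERSION=0.4.6
URL="https://s3.amazonaws.com/julialang/bin/linux/x64/${VERSION%.*}/julia-${VERSION}-linux-x86_64.tar.gz"
echo "$URL"
# Then, if the URL resolves:
# wget "$URL" && tar -xzf "julia-${VERSION}-linux-x86_64.tar.gz"
```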

On Fri, 2016-09-30 at 23:15, amik...@gmail.com wrote:
> Hello,
>
> Sorry for the possibly dumb question: for benchmarking purposes I'm looking
> for the generic Linux binaries for julia-0.4.6 (I don't want to compile
> from the source). Can they still be found somewhere?
>
> Many thanks,


[julia-users] Generic Linux binaries for julia-0.4.6

2016-09-30 Thread amiksvi
Hello,

Sorry for the possibly dumb question: for benchmarking purposes I'm looking 
for the generic Linux binaries for julia-0.4.6 (I don't want to compile 
from the source). Can they still be found somewhere?

Many thanks,


[julia-users] Simple test functions generate different codegen

2016-09-30 Thread mmh
I would have thought that these two functions would produce the same 
code, but they do not.

Could someone explain the difference, and which is preferred and why?


http://pastebin.com/GJ8YPfV3

function test1(x)
y = 2.0
u = 2.320
x < 0 && (u = 32.0)
x > 1 && (u = 1.0)
return u + y
end


function test2(x)
y = 2.0
u = 2.320
u = x < 0 ? 32.0 : u
u = x > 1 ? 1.0 : u
return u + y
end


@code_llvm test1(2.2)

@code_llvm test2(2.2)
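Before digging into the LLVM IR, it is worth ruling out type instability as the cause; a quick check (a sketch, assuming the functions above are defined) is `@code_warntype`:

```julia
# Both variants keep u::Float64 throughout, so any difference in the IR is
# down to lowering: the short-circuit form (x < 0 && (u = 32.0)) lowers to
# branches with assignments, while the ternary form can often lower to
# branch-free select instructions.
@code_warntype test1(2.2)
@code_warntype test2(2.2)
```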



[julia-users] Parallelizing Error on 0.5 on Ubuntu

2016-09-30 Thread ABB
On this Julia version:

   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0 (2016-09-19 18:14 UTC)
 _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
|__/   |  x86_64-pc-linux-gnu

running on:

Ubuntu 14.04.3 LTS

I am trying to do a Monte Carlo simulation in parallel across 36 workers.  

I have two problems (at least).

1.  Some of the workers terminate at the beginning of the simulation, but I 
don't understand the error message:

Worker 5 terminated.ERROR (unhandled task failure): ProcessExitedException()
 in yieldto(::Task, ::ANY) at ./event.jl:136
 in wait() at ./event.jl:169
 in wait(::Condition) at ./event.jl:27
 in wait(::Channel{Any}) at ./channels.jl:92
 in take!(::Channel{Any}) at ./channels.jl:73
 in #remotecall_fetch#606(::Array{Any,1}, ::Function, ::Function, 
::Base.Worker, ::Function, ::Vararg{Any,N}) at ./multi.jl:1066
 in remotecall_fetch(::Function, ::Base.Worker, ::Function, 
::Vararg{Any,N}) at ./multi.jl:1062
 in #remotecall_fetch#609(::Array{Any,1}, ::Function, ::Function, ::Int64, 
::Function, ::Vararg{Any,N}) at ./multi.jl:1080
 in remotecall_fetch(::Function, ::Int64, ::Function, ::Vararg{Any,N}) at 
./multi.jl:1080
 in 
(::Base.##667#668{Base.#+,ProjectModule.##45#47{Int64,Array{Any,1},Array{Any,2}},UnitRange{Int64},Array{UnitRange{Int64},1}})()
 
at ./multi.jl:1998

This is not a huge problem as the rest of the workers keep going and can 
finish the simulation, but I would like to understand what is going on, if 
possible.  (And maybe how to fix it so as to use those workers.)  

2.  The more important problem is that at the end of the simulation, I run 
into other errors and nothing is returned.  My (uninformed and probably 
wrong) guess is that there is something the program doesn't like about the 
fact that the different workers are finishing at different times?  The 
errors I get are:


ERROR (unhandled task failure): EOFError: read end of file
Worker 16 terminated.ERROR (unhandled task failure): 
ProcessExitedException()
 in yieldto(::Task, ::ANY) at ./event.jl:136
 in wait() at ./event.jl:169
 in wait(::Condition) at ./event.jl:27
 in wait(::Channel{Any}) at ./channels.jl:92
 in take!(::Channel{Any}) at ./channels.jl:73
 in #remotecall_fetch#606(::Array{Any,1}, ::Function, ::Function, 
::Base.Worker, ::Function, ::Vararg{Any,N}) at ./multi.jl:1066
 in remotecall_fetch(::Function, ::Base.Worker, ::Function, 
::Vararg{Any,N}) at ./multi.jl:1062
 in #remotecall_fetch#609(::Array{Any,1}, ::Function, ::Function, ::Int64, 
::Function, ::Vararg{Any,N}) at ./multi.jl:1080
 in remotecall_fetch(::Function, ::Int64, ::Function, ::Vararg{Any,N}) at 
./multi.jl:1080
 in 
(::Base.##667#668{Base.#+,ProjectModule.##45#47{Int64,Array{Any,1},Array{Any,2}},UnitRange{Int64},Array{UnitRange{Int64},1}})()
 
at ./multi.jl:1998

And - 

ERROR: LoadError: ProcessExitedException()
 in wait(::Task) at ./task.jl:135
 in collect_to!(::Array{Array{Float64,2},1}, 
::Base.Generator{Array{Task,1},Base.#wait}, ::Int64, ::Int64) at 
./array.jl:340
 in collect(::Base.Generator{Array{Task,1},Base.#wait}) at ./array.jl:308
 in preduce(::Function, ::Function, ::UnitRange{Int64}) at ./multi.jl:2002
 in (::ProjectModule.##44#46{Int64,Array{Any,1},Array{Any,2},Int64})() at 
./multi.jl:2011
 in macro expansion at ./task.jl:326 [inlined]
 in #OuterSim#43(::Int64, ::Int64, ::Int64, ::Array{Any,1}, ::Array{Any,2}, 
::Function, ::Int64) at /home/ubuntu/dynhosp/DataStructs.jl:1321
 in (::ProjectModule.#kw##OuterSim)(::Array{Any,1}, 
::ProjectModule.#OuterSim, ::Int64) at ./:0
 in include_from_node1(::String) at ./loading.jl:488
 in process_options(::Base.JLOptions) at ./client.jl:262
 in _start() at ./client.jl:318
while loading /home/ubuntu/dynhosp/Run.jl, in expression starting on line 9

And finally: 

ERROR (unhandled task failure): On worker 9:
ArgumentError: Dict(kv): kv needs to be an iterator of tuples or pairs
 in Type at ./dict.jl:388
 in CalcWTP at /home/ubuntu/dynhosp/DataStructs.jl:728
 in WTPMap at /home/ubuntu/dynhosp/DataStructs.jl:747
 in #PSim#32 at /home/ubuntu/dynhosp/DataStructs.jl:1024
 in #45 at ./multi.jl:2016
 in #625 at ./multi.jl:1421
 in run_work_thunk at ./multi.jl:1001
 in macro expansion at ./multi.jl:1421 [inlined]
 in #624 at ./event.jl:68
 in #remotecall_fetch#606(::Array{Any,1}, ::Function, ::Function, 
::Base.Worker, ::Function, ::Vararg{Any,N}) at ./multi.jl:1070
 in remotecall_fetch(::Function, ::Base.Worker, ::Function, 
::Vararg{Any,N}) at ./multi.jl:1062
 in #remotecall_fetch#609(::Array{Any,1}, ::Function, ::Function, ::Int64, 
::Function, ::Vararg{Any,N}) at ./multi.jl:1080
 in remotecall_fetch(::Function, ::Int64, ::Function, ::Vararg{Any,N}) at 
./multi.jl:1080
 in 
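For problem 1, one way to keep the simulation alive when individual workers die is to funnel the replications through `pmap`, whose `on_error` keyword (new in 0.5) turns a per-task failure into a value instead of an unhandled task failure. A minimal sketch with placeholder replication logic, not the poster's code:

```julia
# Hedged sketch: per-task error capture with pmap's on_error keyword (Julia 0.5).
addprocs(4)

results = pmap(1:1000; on_error = ex -> NaN) do k
    sum(randn(100))   # placeholder for one Monte Carlo replication
end

# Tasks that hit a dead worker show up as NaN in `results`
# instead of aborting the whole run.
```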

[julia-users] JPEG 2000 values are doubled

2016-09-30 Thread bogdan . grama
 

On a simple conversion to UInt16 TIFF, the result pixel values are doubled.

As sample data use any of the Sentinel 2 bands from Amazon S3:
http://sentinel-s2-l1c.s3-website.eu-central-1.amazonaws.com/#tiles/35/T/MN/2016/8/29/0/


using Images
using FileIO

img = load("B01.jp2");
save("B01julia.tif", img);


I think the load function is responsible for this.
Any explanation?
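A hedged diagnostic sketch (assuming the Images.jl API of the time; the `raw` accessor may be named differently in your version): compare the underlying integer range right after loading against what an external tool reports for the band, to see whether the doubling happens in `load` or in `save`.

```julia
using Images
using FileIO

img = load("B01.jp2")
# Inspect the unscaled integer values before any conversion or save.
# If these are already doubled relative to e.g. `gdalinfo -stats B01.jp2`,
# the scaling happens in load(); otherwise it happens in save().
println(extrema(raw(img)))
```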


[julia-users] Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

2016-09-30 Thread Chris Rackauckas
Thanks for the update.

On Thursday, September 29, 2016 at 6:31:29 PM UTC-7, Tim Besard wrote:
>
> Hi all,
>
> CUDAdrv.jl is a Julia wrapper for the CUDA driver API -- not to be 
> confused with its counterpart CUDArt.jl, which wraps the slightly 
> higher-level CUDA runtime API.
>
> The package doesn't feature many high-level or easy-to-use wrappers, but 
> focuses on providing the necessary functionality for other packages to 
> build upon. For example, CUDArt uses CUDAdrv for launching kernels, while 
> CUDAnative (the in-development native programming interface) completely 
> relies on CUDAdrv for all GPU interactions.
>
> It features a ccall-like cudacall interface for launching kernels and 
> passing values:
> using CUDAdrv
> using Base.Test
>
> dev = CuDevice(0)
> ctx = CuContext(dev)
>
> md = CuModuleFile(joinpath(dirname(@__FILE__), "vadd.ptx"))
> vadd = CuFunction(md, "kernel_vadd")
>
> dims = (3,4)
> a = round(rand(Float32, dims) * 100)
> b = round(rand(Float32, dims) * 100)
>
> d_a = CuArray(a)
> d_b = CuArray(b)
> d_c = CuArray(Float32, dims)
>
> len = prod(dims)
> cudacall(vadd, len, 1, (DevicePtr{Cfloat},DevicePtr{Cfloat},DevicePtr{
> Cfloat}), d_a, d_b, d_c)
> c = Array(d_c)
> @test a+b ≈ c
>
> destroy(ctx)
>
> For documentation, refer to the NVIDIA docs 
> . Even though they don't 
> fully match what CUDAdrv.jl implements, the package is well tested, and 
> redocumenting the entire thing is too much work.
>
> Current master of this package only supports 0.5, but there's a tagged 
> version supporting 0.4 (as CUDArt.jl does so as well). It has been tested 
> on CUDA 5.0 up to 8.0, but there might always be issues with certain 
> versions (as the wrappers aren't auto-generated, and probably will never be 
> due to how NVIDIA has implemented cuda.h)
>
> Anybody thinking there's a lot of overlap between CUDArt and CUDAdrv is 
> completely right, but it mimics the overlap between CUDA's runtime and 
> driver APIs as in some cases we do specifically need one or the other (eg., 
> CUDAnative wouldn't work with only the runtime API). There's also some 
> legacy at the Julia side: CUDAdrv.jl is based on CUDA.jl, while CUDArt.jl 
> has been an independent effort.
>
>
> In other news, we have recently *deprecated the old CUDA.jl package*. All 
> users should now use either CUDArt.jl or CUDAdrv.jl, depending on what 
> suits them best. Neither is a drop-in replacement, but the changes should 
> be minor. At the very least, users will have to change the kernel launch 
> syntax, which should use cudacall as shown above. In the future, we might 
> re-use the CUDA.jl package name for the native programming interface 
> currently at CUDAnative.jl
>
>
> Best,
> Tim
>


Re: [julia-users] Re: eval in current scope

2016-09-30 Thread Cedric St-Jean
Looks like I was accidentally on 0.4, and hygiene rules changed in 0.5. 
This works:

using MacroTools

macro withself(fdef)
@assert @capture(fdef, fname_() = expr_)
esc(:(@generated function $(fname)(self)
expr = $(Expr(:quote, expr))
# Replace this with recursive traversal. For demo, I'm assuming a 
single function call
@assert @capture(expr, f_(args__))
:(($f)($([arg in fieldnames(self) ? Expr(:., :self, QuoteNode(arg)) 
: arg for arg in args]...)))
end))
end

type A
ii
oo
end

a = 20
@withself foo() = ii + oo + a

foo(A(33, 44))  # 97



On Friday, September 30, 2016 at 9:59:37 AM UTC-4, Cedric St-Jean wrote:
>
> This only works if A and type_fields are defined in the same module 
>> though. Although to be honest it surprised me a bit that it works at all, I 
>> guess the type definitions are evaluated prior to macro expansions? 
>>
>
> Good point. 
>
> You can use a generated function then:
>
> using MacroTools
>
> macro withself(fdef)
> @assert @capture(fdef, fname_() = expr_) # could accept other arguments
> :(@generated function $(esc(fname))(self)
> expr = $(Expr(:quote, expr))   # grab the `expr` from the macro 
> body
> # Replace this with recursive traversal. For demo, I'm assuming a 
> single function call
> @assert @capture(expr, f_(args__))
> :($f($([arg in fieldnames(self) ? Expr(:., :self, QuoteNode(arg)) 
> : arg for arg in args]...)))
> end)
> end
>
> type A
> ii
> oo
> end
>
> a = 20
> @withself foo() = ii + oo + a
>
> foo(A(33, 44))  # 97
>
> Since generated functions are passed the _type_ of their arguments, which 
> is what you were looking for. It's an abuse of Julia's metaprogramming 
> facilities in strict Lisp tradition ;)
>
> On Friday, September 30, 2016 at 5:32:49 AM UTC-4, Marius Millea wrote:
>
>>
>> Would eval'ing the type inside the macro work? This shows [:x, :y]
>>>
>>>
>> This only works if A and type_fields are defined in the same module 
>> though. Although to be honest it surprised me a bit that it works at all, I 
>> guess the type definitions are evaluated prior to macro expansions? 
>>
>>
>> A macro which defines a type-specific version @self_MyType of your @self 
>>> macro at the definition of the type: 
>>
>>
>> Yea, the solutions both me and fcard coded up originally involved having 
>> to call a macro on the type definition, which is precisely what I'm trying 
>> to get rid of right now. The reason for not using @unpack is just that it's 
>> more verbose than this solution (at the price of the type-redefinition 
>> thing, but for me it's a fine tradeoff). I *really* like getting to write 
>> super-concise functions which read just like the math they represent, 
>> nothing extra distracting, e.g. from my actual code:
>>
>> """Hubble constant at redshift z"""
>> @self Params Hubble(z) = Hfac*sqrt(ρx_over_ωx*((ωc+ωb)*(1+z)^3 + ωk*(1+z
>> )^2 + ωΛ) + ργ(z) + ρν(z))
>>
>>
>> """Optical depth between two redshifts given a free electron fraction 
>> history Xe"""
>> @self Params function τ(Xe::Function, z1, z2)
>> σT*(ωb*ρx_over_ωx)/mH*(1-Yp) * quad(z->Xe(z)/Hubble(z)*(1+z)^2, z1, 
>> z2)
>> end
>>
>>
>>
>>
>>
>>
>>  
>>
>>

Re: [julia-users] Re: eval in current scope

2016-09-30 Thread Cedric St-Jean

>
> This only works if A and type_fields are defined in the same module 
> though. Although to be honest it surprised me a bit that it works at all, I 
> guess the type definitions are evaluated prior to macro expansions? 
>

Good point. 

You can use a generated function then:

using MacroTools

macro withself(fdef)
@assert @capture(fdef, fname_() = expr_) # could accept other arguments
:(@generated function $(esc(fname))(self)
expr = $(Expr(:quote, expr))   # grab the `expr` from the macro body
# Replace this with recursive traversal. For demo, I'm assuming a 
single function call
@assert @capture(expr, f_(args__))
:($f($([arg in fieldnames(self) ? Expr(:., :self, QuoteNode(arg)) : 
arg for arg in args]...)))
end)
end

type A
ii
oo
end

a = 20
@withself foo() = ii + oo + a

foo(A(33, 44))  # 97

Since generated functions are passed the _type_ of their arguments, which 
is what you were looking for. It's an abuse of Julia's metaprogramming 
facilities in strict Lisp tradition ;)

On Friday, September 30, 2016 at 5:32:49 AM UTC-4, Marius Millea wrote:

>
> Would eval'ing the type inside the macro work? This shows [:x, :y]
>>
>>
> This only works if A and type_fields are defined in the same module 
> though. Although to be honest it surprised me a bit that it works at all, I 
> guess the type definitions are evaluated prior to macro expansions? 
>
>
> A macro which defines a type-specific version @self_MyType of your @self 
>> macro at the definition of the type: 
>
>
> Yea, the solutions both me and fcard coded up originally involved having 
> to call a macro on the type definition, which is precisely what I'm trying 
> to get rid of right now. The reason for not using @unpack is just that it's 
> more verbose than this solution (at the price of the type-redefinition 
> thing, but for me it's a fine tradeoff). I *really* like getting to write 
> super-concise functions which read just like the math they represent, 
> nothing extra distracting, e.g. from my actual code:
>
> """Hubble constant at redshift z"""
> @self Params Hubble(z) = Hfac*sqrt(ρx_over_ωx*((ωc+ωb)*(1+z)^3 + ωk*(1+z)^
> 2 + ωΛ) + ργ(z) + ρν(z))
>
>
> """Optical depth between two redshifts given a free electron fraction 
> history Xe"""
> @self Params function τ(Xe::Function, z1, z2)
> σT*(ωb*ρx_over_ωx)/mH*(1-Yp) * quad(z->Xe(z)/Hubble(z)*(1+z)^2, z1, z2
> )
> end
>
>
>
>
>
>
>  
>
>

Re: [julia-users] Re: Julia-i18n logo proposal

2016-09-30 Thread Stefan Karpinski
Lovely logo for i18n. 

On Fri, Sep 30, 2016 at 5:51 AM, Waldir Pimenta wrote:

> What a nice coincidence! That was totally unintended :)
>
> By the way, here's a reference for the most used writing systems:
> https://en.wikipedia.org/wiki/List_of_writing_systems#List_
> of_writing_scripts_by_adoption
>
>
> On Friday, September 30, 2016 at 9:16:05 AM UTC+1, Kenta Sato wrote:
>>
>> "朱" means a kind of red color in Japanese (https://ja.wikipedia.org/wiki
>> /%E6%9C%B1%E8%89%B2). You placed it at the best circle of the logo.
>>
>> On Friday, September 30, 2016 at 9:47:04 AM UTC+9, Waldir Pimenta wrote:
>>>
>>> Hi all. I made a proposal for the logo for the Julia-i18n organization:
>>> http://imgh.us/julia-i18n_1.svg
>>>
>>> It uses the three most used scripts worldwide, and the characters are
>>> actually the start of the word “Julia” as written in each of those scripts.
>>>
>>> Looking forward to know what you guys think.
>>>
>>> --Waldir
>>>
>>


[julia-users] indirect array indexing

2016-09-30 Thread apaoluzzi
Hi guys,

I'm learning the language while implementing an advanced package for 
geometric and solid modeling.
My question is: What is the right idiom (shorter and/or faster) for writing 
this kind of array indexing:

linesFromLineArray(V,EV) = Any[[V[:,EV[:,k][1]] V[:,EV[:,k][2]]  ] for 
k=1:size(EV,2)]

Efficiency is the strongest concern, since this method should provide the 
indirect indexing for any kind of cellular complex.
Many thanks for your help


julia> V,EV
(
2x10 Array{Float64,2}:
 0.13611  0.14143  0.38501  0.42103  0.96927  0.90207  0.0 0.11508 
 0.61437  0.52335
 0.59824  0.58921  0.25964  0.24118  0.19741  0.34109  0.0213  0.0 
 0.05601  0.17309,

2x5 Array{Int64,2}:
 1  3  5  7   9
 2  4  6  8  10)

julia> linesFromLineArray(V,EV)
5-element Array{Any,1}:
 2x2 Array{Float64,2}:
 0.13611  0.14143
 0.59824  0.58921
 2x2 Array{Float64,2}:
 0.38501  0.42103
 0.25964  0.24118
 2x2 Array{Float64,2}:
 0.96927  0.90207
 0.19741  0.34109
 2x2 Array{Float64,2}:
 0.0 0.11508
 0.0213  0.0  
 2x2 Array{Float64,2}:
 0.61437  0.52335
 0.05601  0.17309



[julia-users] Seg fault when using Logging.jl

2016-09-30 Thread Stefan Kruger
Not sure if this is the right place for this...

I encountered a seg fault on v0.5 (also on v0.4) when I introduced 
Logging.jl - my set of tests runs fine without Logging, but crashes 
immediately when I try to log something:

signal (11): Segmentation fault: 11

while loading /Users/stefan/.julia/v0.5/Couchzilla/test/crud_tests.jl, in 
expression starting on line 1

uv_write2 at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/deps/srccache/libuv-8d5131b6c1595920dd30644cd1435b4f344b46c8/src/unix/stream.c:1389

uv_write at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/deps/srccache/libuv-8d5131b6c1595920dd30644cd1435b4f344b46c8/src/unix/stream.c:1475

jl_uv_write at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/jl_uv.c:424

uv_write at ./stream.jl:809

unsafe_write at ./stream.jl:830

write at ./io.jl:175

print at ./strings/io.jl:70 [inlined]

with_output_color at ./util.jl:302

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_apply_generic at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/gf.c:1942

jl_apply at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia.h:1392 
[inlined]

jl_f__apply at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/builtins.c:547

print_with_color at ./util.jl:306

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_apply_generic at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/gf.c:1942

err at /Users/stefan/.julia/v0.5/Logging/src/Logging.jl:53

#relax#2 at /Users/stefan/.julia/v0.5/Couchzilla/src/utils.jl:97

unknown function (ip: 0x318f98bb6)

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_apply_generic at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/gf.c:1942

#relax at ./:0

#readdoc#6 at /Users/stefan/.julia/v0.5/Couchzilla/src/database.jl:177

readdoc at /Users/stefan/.julia/v0.5/Couchzilla/src/database.jl:126

unknown function (ip: 0x318f94f66)

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_apply_generic at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/gf.c:1942

macro expansion; at 
/Users/stefan/.julia/v0.5/Couchzilla/test/crud_tests.jl:4 [inlined]

macro expansion; at ./test.jl:672 [inlined]

anonymous at ./ (unknown line)

unknown function (ip: 0x318f9484f)

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_toplevel_eval_flex at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/toplevel.c:569

jl_parse_eval_all at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/ast.c:717

jl_load at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/toplevel.c:596 
[inlined]

jl_load_ at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/toplevel.c:605

include_from_node1 at ./loading.jl:488

jlcall_include_from_node1_20125 at 
/Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib 
(unknown line)

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_apply_generic at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/gf.c:1942

do_call at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/interpreter.c:66

eval at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/interpreter.c:190

eval_body at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/interpreter.c:534

eval_body at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/interpreter.c:515

jl_interpret_call at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/interpreter.c:573

jl_toplevel_eval_flex at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/toplevel.c:572

jl_parse_eval_all at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/ast.c:717

jl_load at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/toplevel.c:596 
[inlined]

jl_load_ at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/toplevel.c:605

include_from_node1 at ./loading.jl:488

jlcall_include_from_node1_20125 at 
/Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib 
(unknown line)

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_apply_generic at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/gf.c:1942

process_options at ./client.jl:262

_start at ./client.jl:318

jlcall__start_21452 at 
/Applications/Julia-0.5.app/Contents/Resources/julia/lib/julia/sys.dylib 
(unknown line)

jl_call_method_internal at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/./julia_internal.h:189 
[inlined]

jl_apply_generic at 
/Users/osx/buildbot/slave/package_osx10_9-x64/build/src/gf.c:1942

true_main at /usr/local/bin/julia (unknown line)
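A minimal reproduction sketch, assuming the Logging.jl API of the time (`configure` and the `err` function that appears at Logging.jl:53 in the backtrace; the message text is illustrative):

```julia
using Logging

# Configure the root logger, then hit the same code path as the backtrace:
# err() formats the message with print_with_color, which is where the
# trace above enters libuv's write path.
Logging.configure(level=Logging.INFO)
err("request failed")
```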

[julia-users] Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

2016-09-30 Thread Simon Danisch
Great work! :)

Well, I think GPUArrays should be the right place! Whether it actually is 
depends on how much time and cooperation I get ;)
The plan is to integrate all these 3rd-party libraries. If you could help 
me with that, it would already be a great first step toward establishing the 
library :)

Am Freitag, 30. September 2016 03:31:29 UTC+2 schrieb Tim Besard:
>
> Hi all,
>
> CUDAdrv.jl is a Julia wrapper for the CUDA driver API -- not to be 
> confused with its counterpart CUDArt.jl, which wraps the slightly 
> higher-level CUDA runtime API.
>
> The package doesn't feature many high-level or easy-to-use wrappers, but 
> focuses on providing the necessary functionality for other packages to 
> build upon. For example, CUDArt uses CUDAdrv for launching kernels, while 
> CUDAnative (the in-development native programming interface) completely 
> relies on CUDAdrv for all GPU interactions.
>
> It features a ccall-like cudacall interface for launching kernels and 
> passing values:
> using CUDAdrv
> using Base.Test
>
> dev = CuDevice(0)
> ctx = CuContext(dev)
>
> md = CuModuleFile(joinpath(dirname(@__FILE__), "vadd.ptx"))
> vadd = CuFunction(md, "kernel_vadd")
>
> dims = (3,4)
> a = round(rand(Float32, dims) * 100)
> b = round(rand(Float32, dims) * 100)
>
> d_a = CuArray(a)
> d_b = CuArray(b)
> d_c = CuArray(Float32, dims)
>
> len = prod(dims)
> cudacall(vadd, len, 1, (DevicePtr{Cfloat},DevicePtr{Cfloat},DevicePtr{
> Cfloat}), d_a, d_b, d_c)
> c = Array(d_c)
> @test a+b ≈ c
>
> destroy(ctx)
>
> For documentation, refer to the NVIDIA docs 
> . Even though they don't 
> fully match what CUDAdrv.jl implements, the package is well tested, and 
> redocumenting the entire thing is too much work.
>
> Current master of this package only supports 0.5, but there's a tagged 
> version supporting 0.4 (as CUDArt.jl does so as well). It has been tested 
> on CUDA 5.0 up to 8.0, but there might always be issues with certain 
> versions (as the wrappers aren't auto-generated, and probably will never be 
> due to how NVIDIA has implemented cuda.h)
>
> Anybody thinking there's a lot of overlap between CUDArt and CUDAdrv is 
> completely right, but it mimics the overlap between CUDA's runtime and 
> driver APIs as in some cases we do specifically need one or the other (eg., 
> CUDAnative wouldn't work with only the runtime API). There's also some 
> legacy at the Julia side: CUDAdrv.jl is based on CUDA.jl, while CUDArt.jl 
> has been an independent effort.
>
>
> In other news, we have recently *deprecated the old CUDA.jl package*. All 
> users should now use either CUDArt.jl or CUDAdrv.jl, depending on what 
> suits them best. Neither is a drop-in replacement, but the changes should 
> be minor. At the very least, users will have to change the kernel launch 
> syntax, which should use cudacall as shown above. In the future, we might 
> re-use the CUDA.jl package name for the native programming interface 
> currently at CUDAnative.jl
>
>
> Best,
> Tim
>


[julia-users] Re: Julia-i18n logo proposal

2016-09-30 Thread Waldir Pimenta
What a nice coincidence! That was totally unintended :)

By the way, here's a reference for the most used writing systems: 
https://en.wikipedia.org/wiki/List_of_writing_systems#List_of_writing_scripts_by_adoption

On Friday, September 30, 2016 at 9:16:05 AM UTC+1, Kenta Sato wrote:
>
> "朱" means a kind of red color in Japanese (
> https://ja.wikipedia.org/wiki/%E6%9C%B1%E8%89%B2). You placed it at the 
> best circle of the logo. 
>
> On Friday, September 30, 2016 at 9:47:04 AM UTC+9, Waldir Pimenta wrote:
>>
>> Hi all. I made a proposal for the logo for the Julia-i18n organization: 
>> http://imgh.us/julia-i18n_1.svg
>>
>> It uses the three most used scripts worldwide, and the characters are 
>> actually the start of the word “Julia” as written in each of those scripts.
>>
>> Looking forward to know what you guys think.
>>
>> --Waldir
>>
>

Re: [julia-users] Re: eval in current scope

2016-09-30 Thread Marius Millea


> Would eval'ing the type inside the macro work? This shows [:x, :y]
>
>
This only works if A and type_fields are defined in the same module though. 
Although to be honest it surprised me a bit that it works at all, I guess 
the type definitions are evaluated prior to macro expansions? 


A macro which defines a type-specific version @self_MyType of your @self 
> macro at the definition of the type: 


Yea, the solutions both me and fcard coded up originally involved having to 
call a macro on the type definition, which is precisely what I'm trying to 
get rid of right now. The reason for not using @unpack is just that it's 
more verbose than this solution (at the price of the type-redefinition 
thing, but for me it's a fine tradeoff). I *really* like getting to write 
super-concise functions which read just like the math they represent, 
nothing extra distracting, e.g. from my actual code:

"""Hubble constant at redshift z"""
@self Params Hubble(z) = Hfac*sqrt(ρx_over_ωx*((ωc+ωb)*(1+z)^3 + ωk*(1+z)^2 
+ ωΛ) + ργ(z) + ρν(z))


"""Optical depth between two redshifts given a free electron fraction 
history Xe"""
@self Params function τ(Xe::Function, z1, z2)
σT*(ωb*ρx_over_ωx)/mH*(1-Yp) * quad(z->Xe(z)/Hubble(z)*(1+z)^2, z1, z2)
end






 



Re: [julia-users] Re: Julia-i18n logo proposal

2016-09-30 Thread henri.gir...@gmail.com

赤 is Chinese for red/scarlet (very near) lol
Le 30/09/2016 à 10:16, Kenta Sato a écrit :
"朱" means a kind of red color in Japanese 
(https://ja.wikipedia.org/wiki/%E6%9C%B1%E8%89%B2). You placed it at 
the best circle of the logo.


On Friday, September 30, 2016 at 9:47:04 AM UTC+9, Waldir Pimenta wrote:

Hi all. I made a proposal for the logo for the Julia-i18n
organization: http://imgh.us/julia-i18n_1.svg


It uses the three most used scripts worldwide, and the characters
are actually the start of the word “Julia” as written in each of
those scripts.

Looking forward to know what you guys think.

--Waldir





[julia-users] Re: Julia-i18n logo proposal

2016-09-30 Thread Kenta Sato
"朱" means a kind of red color in Japanese 
(https://ja.wikipedia.org/wiki/%E6%9C%B1%E8%89%B2). You placed it at the 
best circle of the logo. 

On Friday, September 30, 2016 at 9:47:04 AM UTC+9, Waldir Pimenta wrote:
>
> Hi all. I made a proposal for the logo for the Julia-i18n organization: 
> http://imgh.us/julia-i18n_1.svg
>
> It uses the three most used scripts worldwide, and the characters are 
> actually the start of the word “Julia” as written in each of those scripts.
>
> Looking forward to hearing what you guys think.
>
> --Waldir
>


Re: [julia-users] Re: eval in current scope

2016-09-30 Thread Mauro
On Fri, 2016-09-30 at 00:53, Marius Millea  wrote:
> I think there's at least one scenario where eval-in-a-macro is not a
> mistake, mainly when you want to generate some code that depends on 1) some
> passed in expression and 2) something which can only be known at runtime.

I think one of the problems with macro-evals is that (at least the
advanced user) will expect a macro to do a pure syntax transform,
independent of the state of the runtime.  Anything else would be
surprising.  Maybe macros which call `eval` should have names with an
`!` to advertise their non-standard-ness?

> Here's my example:
>
> The macro (@self) which I'm writing takes a type name and a function
> definition, and gives the function a "self" argument of that type and
> rewrites all occurrences of the type's fields, X, to self.X. Effectively it
> takes this:
>
> type MyType
> x
> end
>
> @self MyType function inc()
> x += 1
> end
>
> and spits out:
>
> function inc(self::MyType)
> self.x += 1
> end
>
> (if this sounds familiar, it's because I've discussed it here before, which
> spun off this , that I'm
> currently working on tweaking)
>
>
> To do this my code needs to modify the function expression, but this
> modification depends on fieldnames(MyType), which can *only* be known at
> runtime. Hence what I'm doing is,
>
> macro self(typ,func)
>
> function modify_func(fieldnames)
> # in here I have access to `func` expression *and*
> `fieldnames(typ)` as evaluated at runtime
> # return modified func expression
> end
>
> quote
> $(esc(:eval))($modify_func(fieldnames($(esc(typ)))))
> end
>
> end
>
> I don't see a cleaner way of doing this, but I'm happy to take suggestions.
>
> (Btw: my original question was w.r.t. that last "eval', which makes it so
> that currently this doesn't work on function closures. I'm still processing
> the several suggestions in this context...)
>
>
> On Tuesday, September 27, 2016 at 5:12:44 PM UTC+2, Steven G. Johnson wrote:
>>
>> On Tuesday, September 27, 2016 at 10:27:59 AM UTC-4, Stefan Karpinski
>> wrote:
>>>
>>> But note that if you want some piece of f to be "pasted" from the user
>>> and have access to certain parts of the local state of f, it's probably a
>>> much better design to let the user pass in a function which f calls,
>>> passing the function the state that it should have access to:
>>>
>>
>> Right; using "eval" in a function is almost always a mistake, an
>> indication that you should really be using a higher-order function.
>> 
>>


Re: [julia-users] Re: eval in current scope

2016-09-30 Thread Mauro
You could do it like so:

A macro which defines a type-specific version @self_MyType of your @self
macro at the definition of the type:

@with_self type MyType
   x
end

Then use that generated macro when defining the function:

@self_MyType function inc()
x += 1
end

This avoids eval at little extra cost.  The only downside is that you
cannot easily handle SomeoneElsesType.  I do something similar in
https://github.com/mauro3/Parameters.jl/blob/cb147097de1cad1dd5c089243743f21a576a4a32/src/Parameters.jl#L381
where I construct a type specific macro: @unpack_MyType.

(And on a side note, and I think repeating myself from an earlier
thread, why don't you use Parameters.jl's @unpack instead?  Consider
this:

type MyType
  delta
end

# somewhere else, maybe far away, define:
@self MyType function f(x)
  sin(pi*x + delta)
end

# ...
# some time far in the future add a field to MyType:
type MyType
  delta
  pi
end

# and now your function f breaks silently!

Being explicit with what you unpack from MyType, using e.g. @unpack, avoids 
this.
)
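
A minimal sketch of the explicit style Mauro is suggesting (assuming Parameters.jl and its `@unpack field = obj` form; since f names exactly the fields it uses, adding a pi field to MyType later cannot silently shadow Base.pi):

```julia
using Parameters

type MyType
    delta
end

function f(m::MyType, x)
    @unpack delta = m   # only `delta` is pulled into scope, explicitly
    sin(pi*x + delta)   # `pi` here is unambiguously Base.pi
end

f(MyType(0.0), 0.5)     # sin(pi/2)
```

The point of the explicit form is exactly the failure mode sketched above: with @self, an unqualified name silently changes meaning when the type grows a field of the same name; with @unpack it cannot.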

On Fri, 2016-09-30 at 00:53, Marius Millea  wrote:
> I think there's at least one scenario where eval-in-a-macro is not a
> mistake, mainly when you want to generate some code that depends on 1) some
> passed in expression and 2) something which can only be known at runtime.
> Here's my example:
>
> The macro (@self) which I'm writing takes a type name and a function
> definition, and gives the function a "self" argument of that type and
> rewrites all occurrences of the type's fields, X, to self.X. Effectively it
> takes this:
>
> type MyType
> x
> end
>
> @self MyType function inc()
> x += 1
> end
>
> and spits out:
>
> function inc(self::MyType)
> self.x += 1
> end
>
> (if this sounds familiar, it's because I've discussed it here before, which
> spun off this , that I'm
> currently working on tweaking)
>
>
> To do this my code needs to modify the function expression, but this
> modification depends on fieldnames(MyType), which can *only* be known at
> runtime. Hence what I'm doing is,
>
> macro self(typ,func)
>
> function modify_func(fieldnames)
> # in here I have access to `func` expression *and*
> `fieldnames(typ)` as evaluated at runtime
> # return modified func expression
> end
>
> quote
> $(esc(:eval))($modify_func(fieldnames($(esc(typ)))))
> end
>
> end
>
> I don't see a cleaner way of doing this, but I'm happy to take suggestions.
>
> (Btw: my original question was w.r.t. that last "eval', which makes it so
> that currently this doesn't work on function closures. I'm still processing
> the several suggestions in this context...)
>
>
> On Tuesday, September 27, 2016 at 5:12:44 PM UTC+2, Steven G. Johnson wrote:
>>
>> On Tuesday, September 27, 2016 at 10:27:59 AM UTC-4, Stefan Karpinski
>> wrote:
>>>
>>> But note that if you want some piece of f to be "pasted" from the user
>>> and have access to certain parts of the local state of f, it's probably a
>>> much better design to let the user pass in a function which f calls,
>>> passing the function the state that it should have access to:
>>>
>>
>> Right; using "eval" in a function is almost always a mistake, an
>> indication that you should really be using a higher-order function.
>> 
>>
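
To make the higher-order-function advice in the quote above concrete, here is a minimal hypothetical sketch (not code from the thread) of replacing eval-pasted user code with a callback that receives exactly the state it should see:

```julia
# Rather than eval-ing a user-supplied expression inside `simulate`
# (which would run at global scope and could not see the locals anyway),
# accept a function and hand it the state explicitly.
function simulate(step::Function)
    state = 0.0
    for i in 1:10
        state = step(state, i)   # user code runs with explicit state
    end
    return state
end

simulate((s, i) -> s + i)        # accumulates 1 through 10 into state
```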


Re: [julia-users] Julia-i18n logo proposal

2016-09-30 Thread henri.gir...@gmail.com
Really excellent, your icon! I'll adopt it straight away for my Julia icon 
along with the others!


Best

Henri


On 30/09/2016 at 02:47, Waldir Pimenta wrote:
Hi all. I made a proposal for the logo for the Julia-i18n 
organization: http://imgh.us/julia-i18n_1.svg


It uses the three most used scripts worldwide, and the characters are 
actually the start of the word “Julia” as written in each of those 
scripts.


Looking forward to hearing what you guys think.

--Waldir




Re: [julia-users] Re: translating julia

2016-09-30 Thread henri.gir...@gmail.com

 Hi Ismael
 
That's it. I have a little GitHub with forks where I put my examples. I 
read Gitter and saw that most of the translation is in staging now.


I found the search field on translation page.

Thanks

Best Henri


On 29/09/2016 at 23:48, Ismael Venegas Castelló wrote:

Hello Henri!

Just a question, you are aishenri in GitHub right? Because I answered 
this same question at julia-i18n chat room at Gitter, just want to 
make sure; I don't want to leave any question unanswered.


  * https://gitter.im/JuliaLangEs/julia-i18n?at=57ed5568be5dec755007a21c


Cheers
Ismael Venegas Castelló


On Thursday, September 29, 2016 at 2:19:46 (UTC-5), Henri Girard 
wrote:


Hi Ismael,

I would like to translate "home" first, but I noticed it's
difficult to find all the text about it?

Is there a way to search for a specific text?

I already translated a good part of it, but I need to see it so
that I can be sure of that.


Best
Henri
On 28/09/2016 at 11:54, henri@gmail.com wrote:


I am wondering if one must write the HTML markup when translating?


On 28/09/2016 at 10:26, Ismael Venegas Castelló wrote:

Hello Henri!

Currently French is about 0% translated; we are adding to
production the languages that have at minimum 90% of the home
page translated, but you can see the current progress of all the
languages on the staging site here:

  * http://julialanges.github.io

You can see here a video I made tonight, where I am translating,
in order to give a taste of the workflow involved in doing this
with Transifex Live.

Please let me know if you have any doubt and I will gladly help
you as much as I can and thank you very much for your interest
and support to this project.

Cheers,
Ismael Venegas Castelló

On Wednesday, September 28, 2016 at 2:49:32 (UTC-5), Henri
Girard wrote:

I went to the French translation and started to translate, but I can't
see it on julialang.org  ?
Is it not updated?
Regards









Re: [julia-users] Re: Any 0.5 performance tips?

2016-09-30 Thread Mauro
On Fri, 2016-09-30 at 03:45, Andrew  wrote:
> I checked, and my objective function is evaluated exactly as many times
> under 0.4 as it is under 0.5. The number of iterations must be the same.
>
> I also looked at the times more precisely. For one particular function call
> in the code, I have:
>
> 0.4 with old code: 6.7s 18.5M allocations
> 0.4 with 0.5 style code(regular anonymous functions) 11.6s, 141M
> allocations
> 0.5: 36.2s, 189M allocations
>
> Surprisingly, 0.4 is still much faster even without the fast anonymous
> functions trick. It doesn't look like 0.5 is generating many more
> allocations than 0.4 on the same code, the time is just a lot slower.

Sounds like you're not far off a minimal working example.  Post it and
I'm sure it will be dissected in no time. (And an issue can be filed).

> On Thursday, September 29, 2016 at 3:36:46 PM UTC-4, Tim Holy wrote:
>>
>> No real clue about what's happening, but my immediate thought was that if
>> your algorithm is iterative and uses some kind of threshold to decide
>> convergence, then it seems possible that a change in the accuracy of some
>> computation might lead to it getting "stuck" occasionally due to roundoff
>> error. That's probably more likely to happen because of some kind of
>> worsening rather than some improvement, but either is conceivable.
>>
>> If that's even a possible explanation, I'd check for unusually-large
>> numbers of iterations and then print some kind of convergence info.
>>
>> Best,
>> --Tim
>>
>> On Thu, Sep 29, 2016 at 1:21 PM, Andrew 
>> wrote:
>>
>>> In the 0.4 version the above times are pretty consistent. I never observe
>>> any several thousand allocation calls. I wonder if compilation is occurring
>>> repeatedly.
>>>
>>> This isn't terribly pressing for me since I'm not currently working on
>>> this project, but if there's an easy fix it would be useful for future work.
>>>
>>> (sorry I didn't mean to post twice. For some reason hitting spacebar was
>>> interpreted as the post command?)
>>>
>>>
>>> On Thursday, September 29, 2016 at 2:15:35 PM UTC-4, Andrew wrote:

 I've used @code_warntype everywhere I can think to and I've only found
 one Core.box. The @code_warntype looks like this

 Variables:
   #self#::#innerloop#3133{#bellman_obj}
   state::State{IdioState,AggState}
   EVspline::Dierckx.Spline1D
   model::Model{CRRA_Family,AggState}
   policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}}
   OO::NW

 #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}

 Body:
   begin

 #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
 = $(Expr(:new,
 ##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj},
 :(state), :(EVspline), :(model), :(policy), :(OO),
 :((Core.getfield)(#self#,:bellman_obj)::#bellman_obj)))
   SSAValue(0) =
 #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}

 (Core.setfield!)((Core.getfield)(#self#::#innerloop#3133{#bellman_obj},:obj)::CORE.BOX,:contents,SSAValue(0))::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
   return SSAValue(0)

 end::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}


 I put the CORE.BOX in all caps near the bottom.

 I have no idea if this is actually a problem. The return type is stable.
 Also, this function appears in an outer loop.
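
For what it's worth, a Core.Box in @code_warntype output usually means a captured variable is reassigned somewhere in its enclosing scope; the common workaround is to rebind it with `let` before building the closure, so the closure captures a fresh, never-reassigned binding. A generic sketch (`preprocess` is a hypothetical stand-in; this is not the code above):

```julia
function outer(obj)
    obj = preprocess(obj)      # reassignment forces `obj` into a Core.Box
    return x -> obj(x)         # boxed capture: type-unstable and slow
end

function outer_fixed(obj)
    obj2 = preprocess(obj)
    let obj2 = obj2            # fresh local binding, never reassigned
        return x -> obj2(x)    # captured without boxing
    end
end
```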

 What I noticed putting a @time in places is that in 0.5, occasionally
 calls to my nonlinear equation solver take a really long time, like here:

   0.069224 seconds (9.62 k allocations: 487.873 KB)
   0.07 seconds (39 allocations: 1.922 KB)
   0.06 seconds (29 allocations: 1.391 KB)
   0.11 seconds (74 allocations: 3.781 KB)
   0.09 seconds (54 allocations: 2.719 KB)
   0.08 seconds (54 allocations: 2.719 KB)
   0.08 seconds (49 allocations: 2.453 KB)
   0.07 seconds (44 allocations: 2.188 KB)
   0.07 seconds (44 allocations: 2.188 KB)
   0.06 seconds (39 allocations: 1.922 KB)
   0.07 seconds (39 allocations: 1.922 KB)
   0.06 seconds (39 allocations: 1.922 KB)
   0.05 seconds (34 allocations: 1.656 KB)
   0.05 seconds (34 allocations: 1.656 KB)
   0.04 seconds (29 allocations: 1.391 KB)
   0.04 seconds (24 

[julia-users] Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

2016-09-30 Thread Kyunghun Kim
Good news!
I had wished there would be some integration among the several CUDA packages. 

By the way, is there any plan for a 'standard' GPU array type, such 
as https://github.com/JuliaGPU/GPUArrays.jl ?
CUDArt and CUDAdrv each have their own CUDA array type, and there are 
packages such as ArrayFire.jl. 

For example, if I add a package wrapping a new NVIDIA library such as cuRAND, 
which GPU array type should I support in that package? 

Best, 
Kyunghun