It appears the fill operation adds only about 0.17 seconds on top of the 6.16
seconds that my OS X laptop takes to create this array:

$ ./julia -q

julia> N=10^9
1000000000

julia> @time begin x=zeros(Int64,N); fill(x,0) end
elapsed time: 6.325660691 seconds (8000136616 bytes allocated, 1.71% gc time)
0-element Array{Array{Int64,1},1}


$ ./julia -q

julia> N=10^9
1000000000

julia> @time x=zeros(Int64,N)
elapsed time: 6.160623835 seconds (8000014320 bytes allocated, 0.22% gc time)


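(Incidentally, in the transcript above `fill(x,0)` does not overwrite x — it
constructs a new zero-length array of copies of x, hence the
`0-element Array{Array{Int64,1},1}` output; the in-place fill is `fill!(x,0)`.)
The same comparison — zero-initialized allocation versus an explicit second
fill — can be reproduced outside Julia. A minimal sketch in Python, assuming
NumPy is installed, with the array size scaled down from 10^9 so it runs
quickly:

```python
import time
import numpy as np

N = 10**7  # scaled down from the 10^9 used above, for a quick run

t0 = time.perf_counter()
x = np.zeros(N, dtype=np.int64)   # zero-initialized allocation
t_zeros = time.perf_counter() - t0

t0 = time.perf_counter()
x.fill(0)                         # explicit in-place fill on top of zeros()
t_fill = time.perf_counter() - t0

print(f"zeros: {t_zeros:.4f}s  extra fill: {t_fill:.4f}s")
```

The absolute numbers depend on the machine, but the point is the same: the
extra in-place fill is a small fraction of the total.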

On Mon Nov 24 2014 at 3:18:39 PM Erik Schnetter <[email protected]>
wrote:

> On Mon, Nov 24, 2014 at 3:01 PM, David Smith <[email protected]>
> wrote:
> > To add some data to this conversation, I just timed allocating a billion
> > Int64s on my macbook, and I got this (I ran these multiple times before
> this
> > and got similar timings):
> >
> > julia> N=1_000_000_000
> > 1000000000
> >
> > julia> @time x = Array(Int64,N);
> > elapsed time: 0.022577671 seconds (8000000128 bytes allocated)
> >
> > julia> @time x = zeros(Int64,N);
> > elapsed time: 3.95432248 seconds (8000000152 bytes allocated)
> >
> > So we are talking adding possibly seconds to a program per large array
> > allocation.
>
> This is not quite right -- the first does not actually map the pages
> into memory; this is only done lazily when they are accessed the first
> time. You need to compare "alloc uninitialized; then initialize once"
> with "alloc zero-initialized; then initialize again".
>
> Current high-end system architectures have memory write speeds of ten
> or twenty GByte per second; this is what you should see for very large
> arrays -- this would be about 0.4 seconds for your case. For smaller
> arrays, the data would reside in the cache, so the allocation
> overhead should be even smaller.
>
> -erik
>
> --
> Erik Schnetter <[email protected]>
> http://www.perimeterinstitute.ca/personal/eschnetter/
>
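Erik's point about lazy page mapping can be seen directly with an anonymous
mmap: asking the OS for the memory returns almost immediately, and the cost
shows up only when the pages are first touched. A sketch in Python (the
128 MiB size is an arbitrary choice; exact timings vary by OS and hardware):

```python
import mmap
import time

n = 1 << 27  # 128 MiB of anonymous memory (arbitrary test size)

t0 = time.perf_counter()
m = mmap.mmap(-1, n)            # returns quickly: pages are mapped lazily
t_map = time.perf_counter() - t0

chunk = b"\x00" * (1 << 20)     # touch the memory 1 MiB at a time
t0 = time.perf_counter()
for _ in range(n // len(chunk)):
    m.write(chunk)              # first touch actually commits the pages
t_touch = time.perf_counter() - t0

print(f"map: {t_map:.4f}s  first touch: {t_touch:.4f}s")
m.close()
```

This is why timing `Array(Int64,N)` alone understates the true cost: the
uninitialized allocation defers the page-commit work to the first write.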
