On Mon, Nov 24, 2014 at 3:01 PM, David Smith <[email protected]> wrote:
> To add some data to this conversation, I just timed allocating a billion
> Int64s on my macbook, and I got this (I ran these multiple times before this
> and got similar timings):
>
> julia> N=1_000_000_000
> 1000000000
>
> julia> @time x = Array(Int64,N);
> elapsed time: 0.022577671 seconds (8000000128 bytes allocated)
>
> julia> @time x = zeros(Int64,N);
> elapsed time: 3.95432248 seconds (8000000152 bytes allocated)
>
> So we are talking about possibly adding seconds to a program per large
> array allocation.

This is not quite right -- the first call does not actually map the
pages into memory; that happens lazily, the first time each page is
accessed. For a fair comparison you need to time "alloc uninitialized;
then initialize once" against "alloc zero-initialized; then initialize
again".

Current high-end system architectures have memory write bandwidths of
ten or twenty GByte per second; that is what you should see for very
large arrays -- about 0.4 seconds for the 8 GByte in your case. For
smaller arrays the data would reside in the cache, so the
zero-initialization overhead should be even smaller.
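
To check that number on a particular machine, one can time a pure write
pass over an array whose pages have already been mapped (again only a
sketch; fill! serves as the write pass):

N = 1_000_000_000
x = Array(Int64, N)
fill!(x, 0)                       # touch every page once so it is mapped
t = @elapsed fill!(x, 1)          # time a pure write pass over 8 GByte
println(8N / t / 1e9, " GByte/s written")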

-erik

-- 
Erik Schnetter <[email protected]>
http://www.perimeterinstitute.ca/personal/eschnetter/
