If anyone's interested, using a generic function instead of an anonymous one
reduces the penalty to ~10x:
julia> function f3(g,a)
           pairit(x) = Pair(x,g)
           map(pairit, a)
       end
f3 (generic function with 1 method)
julia> @time f3(2,ones(1_000_000));
108.542 milliseconds (2082 k allocations: 73928 KB, 13.02% gc time)
julia> @time f3(2,ones(1_000_000));
58.142 milliseconds (2000 k allocations: 70313 KB, 42.84% gc time)
julia> @time f3(2,ones(1_000_000));
39.441 milliseconds (2000 k allocations: 70313 KB, 12.40% gc time)
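
Another way to sidestep the anonymous-function penalty is a callable type, so
the element function is a concrete type rather than a closure. Just a sketch,
and note it uses the call-overloading syntax from later Julia versions (the
`(p::T)(x) = ...` form), so it won't parse on the 0.3/0.4 releases shown above;
`PairWith` and `f4` are names I made up for illustration:

```julia
# Closure-free alternative: a callable struct carries `g` as a typed
# field, so `map` receives a concretely-typed function object.
struct PairWith{T}
    g::T
end
(p::PairWith)(x) = Pair(x, p.g)   # make instances of PairWith callable

f4(g, a) = map(PairWith(g), a)

f4(2, ones(3))   # a Vector of Float64 => Int pairs
```

Since `PairWith{Int}` is its own concrete type, each `g` type gets its own
specialized `map` method, which is the same trick packages like
FastAnonymous.jl used at the time.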
On Wednesday, June 17, 2015 at 12:52:34 PM UTC-4, Seth wrote:
>
> Gah. I'm sorry. I can't reproduce my original results! I don't know why,
> but the same tests I ran two days ago are not giving me the same timing. I
> need to go back to the drawing board here.
>
> On Wednesday, June 17, 2015 at 11:37:52 AM UTC-5, Josh Langsfeld wrote:
>>
>> For me, map is 100x slower:
>>
>> julia> function f1(g,a)
>>            [Pair(x,g) for x in a]
>>        end
>> f1 (generic function with 1 method)
>>
>>
>> julia> function f2(g,a)
>>            map(x->Pair(x,g), a)
>>        end
>> f2 (generic function with 1 method)
>>
>>
>> julia> @time f1(2,ones(1_000_000));
>> 25.158 milliseconds (28491 allocations: 24736 KB, 12.69% gc time)
>>
>>
>> julia> @time f1(2,ones(1_000_000));
>> 6.866 milliseconds (8 allocations: 23438 KB, 37.10% gc time)
>>
>>
>> julia> @time f1(2,ones(1_000_000));
>> 6.126 milliseconds (8 allocations: 23438 KB, 25.99% gc time)
>>
>>
>> julia> @time f2(2,ones(1_000_000));
>> 684.994 milliseconds (2057 k allocations: 72842 KB, 1.72% gc time)
>>
>>
>> julia> @time f2(2,ones(1_000_000));
>> 647.267 milliseconds (2000 k allocations: 70313 KB, 3.64% gc time)
>>
>>
>> julia> @time f2(2,ones(1_000_000));
>> 633.149 milliseconds (2000 k allocations: 70313 KB, 0.91% gc time)
>>
>>
>> On Wednesday, June 17, 2015 at 12:04:52 PM UTC-4, Seth wrote:
>>>
>>> Sorry - it's part of a function:
>>>
>>> in_edges(g::AbstractGraph, v::Int) = [Edge(x,v) for x in badj(g,v)]
>>>
>>> vs
>>>
>>> in_edges(g::AbstractGraph, v::Int) = map(x->Edge(x,v), badj(g,v))
>>>
>>>
>>>
>>>
>>> On Wednesday, June 17, 2015 at 10:51:22 AM UTC-5, Mauro wrote:
>>>>
>>>> Note that inside a module is also global scope as each module has its
>>>> own global scope. Best move it into a function. M
>>>>
>>>> On Wed, 2015-06-17 at 17:22, Seth <[email protected]> wrote:
>>>> > The speedups are both via the REPL (global scope?) and inside a
>>>> > module. I did a code_native on both - results are here:
>>>> > https://gist.github.com/sbromberger/b5656189bcece492ffd9.
>>>> >
>>>> >
>>>> >
>>>> > On Wednesday, June 17, 2015 at 9:56:22 AM UTC-5, Stefan Karpinski
>>>> > wrote:
>>>> >>
>>>> >> I would have expected the comprehension to be faster. Is this in
>>>> >> global scope? If so you may want to try the speed comparison again
>>>> >> where each of these occur in a function body and only depend on
>>>> >> function arguments.
>>>> >>
>>>> >> On Tue, Jun 16, 2015 at 10:12 AM, Seth <[email protected]> wrote:
>>>> >>
>>>> >>> I have been using list comprehensions of the form
>>>> >>> bar(g, a) = [Pair(x, g) for x in a] and [foo(x) for x in a]
>>>> >>>
>>>> >>> but recently evaluated bar(g, a) = map(x->Pair(x, g), a) and
>>>> >>> map(x->foo(x), a) as substitutes.
>>>> >>>
>>>> >>> It seems from some limited testing that map is slightly faster
>>>> >>> than the list comprehension, but it's on the order of 3-4% so it
>>>> >>> may just be noise.
>>>> >>> Allocations and gc time are roughly equal (380M allocations,
>>>> >>> ~27000MB, ~6% gc).
>>>> >>>
>>>> >>> Should I prefer one approach over the other (and if so, why)?
>>>> >>>
>>>> >>> Thanks!
>>>> >>>
>>>> >>
>>>> >>
>>>>
>>>>