Yup, I like that idea too. Multiple dispatch is quite useful here. This is
my implementation.
abstract UtilityFunction

immutable CRRA <: UtilityFunction
    sigmac::Float64
    sigmal::Float64
    psi::Float64
end

immutable LogUtility <: UtilityFunction
end

function u(UF::CRRA, consump, labor)
    sigmac = UF.sigmac
    sigmal = UF.sigmal
    psi = UF.psi
    (consump > 0 && labor < 1) && return consump.^(1-sigmac)/(1-sigmac) +
        psi*(1-labor).^(1-sigmal)/(1-sigmal)
    return -Inf
end

function u(UF::LogUtility, consump, labor)
    consump > 0 && return log(consump)
    return -Inf
end
function test1(UF::UtilityFunction)
    for i = 1:1000000
        u(UF, -1. + 1/i, .5)
    end
end

function test2(UF::UtilityFunction)
    for i = 1:1000000
        u(UF, -1. + 1/i, .5)
    end
end

UF1 = CRRA(4, 2, 1)
UF2 = LogUtility()
@time test1(UF1)
@time test2(UF2)
elapsed time: 0.005229738 seconds (80 bytes allocated)
elapsed time: 0.004894504 seconds (80 bytes allocated)
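For what it's worth, the nice thing about this setup is that adding another utility function never touches existing code. Here is a minimal sketch with a made-up `Quadratic` utility (not from anything above, purely hypothetical, and ignoring labor just like `LogUtility` does); the `abstract` line is repeated only so the snippet stands alone:

```julia
abstract UtilityFunction   # same abstract type as above

# Hypothetical quadratic felicity u(c) = a*c - b*c^2; labor is accepted
# but unused, so the signature matches the other `u` methods.
immutable Quadratic <: UtilityFunction
    a::Float64
    b::Float64
end

function u(UF::Quadratic, consump, labor)
    consump > 0 && return UF.a*consump - UF.b*consump^2
    return -Inf
end

UF3 = Quadratic(1.0, 0.5)
u(UF3, 2.0, 0.5)    # 1.0*2 - 0.5*4 = 0.0
u(UF3, -1.0, 0.5)   # guard trips, returns -Inf
```

`test1`/`test2` would work on `UF3` unchanged, since dispatch picks the right `u` method from the concrete type.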
On Tuesday, June 23, 2015 at 8:28:28 PM UTC-4, Colin Bowers wrote:
>
> Yes, that is pretty much how I would do it, although, as I said in my
> previous post, I would make `UtilityFunction` an abstract type, and then
> define my actual utility function immutable, say `MyCustomUtilityFunc`, as
> a subtype of `UtilityFunction`. That way you can easily add different types
> of utility functions later without having to change your existing code. By
> the way, just for the record, a fair test between the two approaches would
> be as follows:
>
> abstract UtilityFunction
> immutable MyCustomUtilityFunction <: UtilityFunction
>     sigmac::Float64
>     sigmal::Float64
>     psi::Float64
> end
> u4(sigmac, sigmal, psi, consump, labor) = consump.^(1-sigmac)/(1-sigmac) +
>     psi*(1-labor).^(1-sigmal)/(1-sigmal)
> u(UF::MyCustomUtilityFunction, consump, labor) =
>     consump.^(1-UF.sigmac)/(1-UF.sigmac) +
>     UF.psi*(1-labor).^(1-UF.sigmal)/(1-UF.sigmal)
>
>
> function test1(sigmac, sigmal, psi)
>     for i = 1:1000000
>         u4(sigmac, sigmal, psi, 1.0 + 1/i, 0.5)
>     end
> end
> function test2(UF::UtilityFunction)
>     for i = 1:1000000
>         u(UF, 1.0 + 1/i, 0.5)
>     end
> end
>
> UF = MyCustomUtilityFunction(4, 2, 1)
> @time test1(4.0, 2.0, 1.0)
> @time test2(UF)
>
>
> On my machine that returns:
>
> elapsed time: 0.090409383 seconds (80 bytes allocated)
> elapsed time: 0.091065473 seconds (80 bytes allocated)
>
> i.e., no significant performance difference.
>
> On 24 June 2015 at 00:56, Andrew <[email protected]> wrote:
>
>> Thanks, this is all very useful. I think I am going to back away from
>> using the @anon functions at the moment, so I'll postpone my idea to
>> encapsulate the functions into a type. Instead, I will just pass a
>> parameter type to an externally defined (not nested) function. I had thought
>> this would be slow (see my question here
>> https://groups.google.com/forum/#!topic/julia-users/6U-otLSx7B0 ), but I
>> did a little testing.
>>
>> immutable UtilityFunction
>>     sigmac::Float64
>>     sigmal::Float64
>>     psi::Float64
>> end
>> function u(UF::UtilityFunction, consump, labor)
>>     sigmac = UF.sigmac
>>     sigmal = UF.sigmal
>>     psi = UF.psi
>>     consump.^(1-sigmac)/(1-sigmac) + psi*(1-labor).^(1-sigmal)/(1-sigmal)
>> end
>> function u4(consump, labor)
>>     consump.^(1-4)/(1-4) + 1*(1-labor).^(1-2)/(1-2)
>> end
>>
>> function test1(UF)
>>     for i = 1:1000000
>>         u4(1. + 1/i, .5)
>>     end
>> end
>> function test2(UF)
>>     for i = 1:1000000
>>         u(UF, 1. + 1/i, .5)
>>     end
>> end
>> UF = UtilityFunction(4, 2, 1)
>>
>> @time test1(UF)
>> @time test2(UF)
>>
>> elapsed time: 0.068562617 seconds (80 bytes allocated)
>> elapsed time: 0.139422608 seconds (80 bytes allocated)
>>
>>
>> So, even versus the extreme case where I built the constants into the
>> function, the slowdown is not huge. I assume @anon would have similar
>> performance to the constants built in case, which is nice. However, I want
>> to be able to share my Julia code with others who aren't very experienced
>> with the language, so I'd be uncomfortable asking them to understand the
>> workings of FastAnonymous. It's useful to know about in case I need the
>> speedup in my own personal code though.
>>
>>
>> On Tuesday, June 23, 2015 at 8:51:25 AM UTC-4, [email protected] wrote:
>>>
>>> Yes, this proves to be an issue for me sometimes too. I asked a
>>> StackOverflow question on this topic a few months ago and got a very
>>> interesting response, as well as some interesting links. See here:
>>>
>>>
>>> http://stackoverflow.com/questions/28356437/julia-compiler-does-not-appear-to-optimize-when-a-function-is-passed-a-function
>>>
>>> As a general rule, if the function you are passing round is very simple
>>> and gets called a lot, then you will really notice the performance
>>> overhead. In other cases where the function is more complicated, or is not
>>> called that often, the overhead will be barely measurable.
>>>
>>> If the number of functions that you want to pass around is not that
>>> large, one way around this is to use types and multiple dispatch instead of
>>> functions, eg
>>>
>>> abstract UtilityFunctions
>>> type QuadraticUtility <: UtilityFunctions
>>>     a::Float64
>>>     b::Float64
>>>     c::Float64
>>> end
>>> evaluate(x::Number, f::QuadraticUtility) = f.a*x^2 + f.b*x + f.c
>>>
>>> Now your function would be something like:
>>>
>>> function solveModel(f::UtilityFunctions, ...)
>>>
>>> and you would call evaluate at the appropriate place in the function
>>> body and multiple dispatch will take care of the rest. There is no
>>> performance overhead with this approach.
>>>
>>> Of course, if you want to be able to just pass in any arbitrary function
>>> that a user might think up, then this approach is not tenable.
>>>
>>> On Tuesday, 23 June 2015 01:07:25 UTC+10, Andrew wrote:
>>>>
>>>>
>>>>
>>>> I'm trying to write some abstract Julia code to solve a variety of
>>>> economics models. Julia provides powerful abstraction tools which I think
>>>> makes it very well-suited to this; however, I've read in several places
>>>> that Julia doesn't yet know how to inline functions passed as arguments,
>>>> hence code like
>>>>
>>>> function SolveModel(Utility::Function, ProductionTechnology::Function, ...)
>>>>     ...
>>>>
>>>> will be slow. I performed this very simple test.
>>>>
>>>> function ftest1()
>>>>     u(x) = log(x)
>>>>     function hello(fun::Function)
>>>>         for i = 1:1000000
>>>>             fun(i.^(1/2))
>>>>         end
>>>>     end
>>>> end
>>>>
>>>> function ftest2()
>>>>     function hello()
>>>>         for i = 1:1000000
>>>>             log(i.^(1/2))
>>>>         end
>>>>     end
>>>> end
>>>>
>>>> @time ftest1()
>>>> @time ftest2()
>>>>
>>>> elapsed time: 6.065e-6 seconds (496 bytes allocated)
>>>> elapsed time: 3.784e-6 seconds (264 bytes allocated)
>>>>
>>>>
>>>> The inlined version is about twice as fast, which isn't all that bad,
>>>> although I'm not sure if it would be worse in a more complicated example.
>>>> Perhaps I shouldn't worry about this, and should code how I want. I was
>>>> wondering though, if anybody knows when this is going to change. I've read
>>>> about functors, which I don't really understand, but it sounds like people
>>>> are working on this problem.
>>>>
>>>
>