>
> I think you should assume you have to perform branching at run-time 
> whenever you want to condition behavior on values (like -1 vs. +1) rather 
> than on types (like Float64 vs. Int64). 
>
> So I think it’s going to be very difficult to achieve the goal you have in 
> mind. 
>

Thanks for your reply!

In C++ I do not see any problem implementing this: I would use virtual 
methods with explicitly specified return types. Since Julia infers return 
types rather than having them declared, I don't see an equivalent, but I 
was wondering if I am missing something. I might indeed just go with macros.
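
For what it's worth, the closest analogue I could sketch in Julia is a 
return-type assertion at the call site, which plays the role of the declared 
virtual return type. This is only a sketch in 0.3 syntax; `Float64`, `A2b`, 
and `g` are names I made up for illustration:

```julia
# Sketch: a stored Function field defeats inference, but asserting the
# return type at the call site restores it, much like a C++ virtual
# method with a declared return type.  (Illustrative only.)
f1(x::Number) = x + 10
f2(x::Number) = x - 10

immutable A2b
    f::Function
    A2b(y::Number) = new(y >= zero(y) ? f1 : f2)
end

# the ::Float64 assertion tells the compiler what a.f returns here
g(a::A2b, x::Float64) = (a.f(x))::Float64

println(g(A2b(1.0), 2.0))   # 12.0
println(g(A2b(-1.0), 2.0))  # -8.0
```

This still pays the cost of the dynamic call, but downstream code at least 
sees a concrete element type.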

Jan

On Jul 30, 2014, at 3:00 AM, Jan Drugowitsch <[email protected]> 
> wrote: 
>
> > Hi all, 
> > 
> > I've been recently running into a presumably frequently occurring 
> problem, and was wondering what the best way is to solve this in Julia. The 
> problem is how to dynamically choose amongst a set of functions based on 
> some constant parameter value (without the use of macros, which usually 
> make my head hurt). A use case would be the sampler for a probability 
> distribution for which several algorithms exist, and whose choice depends 
> on the parameters of this distribution. Given that this distribution is 
> implemented as an immutable type, one would want to pick the best 
> algorithm at construction of an instance rather than every time the 
> sampler is called. 
> > 
> > More specifically, an abstract example is to have some function 
> f(a::BaseA, x::Number) that chooses among 
> > 
> > f1(x::Number) = x + 10 
> > f2(x::Number) = x - 10 
> > 
> > based on whether some parameter y, used to construct BaseA(y), is positive 
> or negative. In addition, we would like to add generic vectorization, akin to 
> > 
> > abstract BaseA 
> > f{T <: Number}(a::BaseA, x::Vector{T}) = [f(a, xi) for xi in x] 
> > 
> > The fastest approach might be 
> > 
> > immutable A0 <: BaseA 
> >     y::Float64 
> >     A0(y::Number) = new(float(y)) 
> > end 
> > f(a::A0, x::Number) = a.y >= 0.0 ? f1(x) : f2(x) 
> > f{T <: Number}(a::A0, x::Vector{T}) = a.y >= 0.0 ? [f1(xi) for xi in x] : [f2(xi) for xi in x] 
> > 
> > that is, re-implementing the vectorization, to avoid a.y >= 0.0 being 
> evaluated for every vector element. This seems to defeat the purpose of 
> writing generic methods. A second possibility is to perform the comparison 
> of y at every iteration by 
> > 
> > immutable A1 <: BaseA 
> >     y::Float64 
> >     A1(y::Number) = new(float(y)) 
> > end 
> > f(a::A1, x::Number) = a.y >= 0.0 ? f1(x) : f2(x) 
> > 
> > For both implementations I find around the same run-time (vector of size 
> 10^8, no gc): 
> > 
> > A0: elapsed time: 0.687105729 seconds (1600000096 bytes allocated) 
> > A1: elapsed time: 0.696713456 seconds (1600000096 bytes allocated) 
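
As an aside, timings of this shape can be reproduced with a harness along 
these lines; a sketch only, with a smaller vector than above, and 
gc_disable()/gc_enable() as the 0.3-era spellings:

```julia
# Sketch of a timing harness for the variants above (smaller vector
# here; gc_disable()/gc_enable() were the 0.3-era spellings).
f1(x::Number) = x + 10
f2(x::Number) = x - 10

abstract BaseA
f{T <: Number}(a::BaseA, x::Vector{T}) = [f(a, xi) for xi in x]

immutable A1 <: BaseA
    y::Float64
    A1(y::Number) = new(float(y))
end
f(a::A1, x::Number) = a.y >= 0.0 ? f1(x) : f2(x)

a = A1(1.0)
x = rand(10^6)
f(a, x)          # warm-up call so compilation is not timed
gc_disable()
@time f(a, x)
gc_enable()
```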
> > 
> > However, I don't find either of them particularly satisfactory. A third 
> option would be to choose the function at construction of a: 
> > 
> > immutable A2 <: BaseA 
> >     f::Function 
> >     A2(y::Number) = new(y >= zero(y) ? f1 : f2) 
> > end 
> > f(a::A2, x::Number) = a.f(x) 
> > 
> > However, this is slow (elapsed time: 25.70402182 seconds (7999991920 
> bytes allocated)), and defeats type inference (it returns an Array{Any,1}). A 
> last option would be to construct instances of different types, depending 
> on y: 
> > 
> > immutable _A3f1 <: BaseA; end 
> > f(a::_A3f1, x::Number) = f1(x) 
> > immutable _A3f2 <: BaseA; end 
> > f(a::_A3f2, x::Number) = f2(x) 
> > A3(y::Number) = y >= zero(y) ? _A3f1() : _A3f2() 
> > 
> > This is (surprisingly) still slower than A0 and A1 (elapsed time: 
> 1.360536247 seconds (1600000096 bytes allocated)), and just shifts the 
> problem of type inference one level higher. 
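
As an aside, the A3 idea can also be written with a Bool type parameter 
rather than two separate types; this is only a sketch in 0.3 syntax, with 
`A5` as a made-up name, and I have not timed it against the variants above:

```julia
# Sketch: same idea as A3, but with the branch encoded in a Bool type
# parameter instead of two separate immutables.  (Illustrative only;
# the type of A5(y) still depends on the run-time value of y.)
f1(x::Number) = x + 10
f2(x::Number) = x - 10

abstract BaseA

immutable A5{P} <: BaseA; end
A5(y::Number) = A5{y >= zero(y)}()

f(a::A5{true},  x::Number) = f1(x)
f(a::A5{false}, x::Number) = f2(x)

println(f(A5(1.0), 2.0))   # 12.0
println(f(A5(-1.0), 2.0))  # -8.0
```

The run-time branch still happens once, in the constructor; afterwards the 
choice lives in the type, so dispatch on f is static.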
> > 
> > Are there any better solutions to tackle this, or are macros the only 
> way? 
> > 
> > Thanks for your help, 
> > Jan 
>
>
