The problem with using a macro is that you always have to make a local 
copy of the data. If this were a language feature, then a mutable type 
could be passed in as the argument, and the same non-obfuscated code could 
update the state in place, which may be preferable depending on the 
situation.

Here is another example from the Julia.org webpage:

immutable Pixel
    r::Uint8
    g::Uint8
    b::Uint8
end

function rgb2gray!(img::Array{Pixel})
    for i=1:length(img)
        p = img[i]
        v = uint8(0.30*p.r + 0.59*p.g + 0.11*p.b)
        img[i] = Pixel(v,v,v)
    end
end

function rgb2gray2!(img::Array{Pixel})
    for i=1:length(img)
        using img[i]    # proposed syntax: would bring the fields r, g, b into scope
        v = uint8(0.30*r + 0.59*g + 0.11*b)
        img[i] = Pixel(v,v,v)
    end
end
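For comparison, the proposed `using img[i]` line can be approximated today with a macro that expands to explicit field copies. This is only a sketch in current Julia syntax: the macro name `@fields` and its hardcoded field list are my own, and the indexing expression gets evaluated once per field:

```julia
struct Pixel
    r::UInt8
    g::UInt8
    b::UInt8
end

# Sketch: expand `@fields ex` into `r = ex.r; g = ex.g; b = ex.b` in the
# caller's scope. The field list is hardcoded here; a real version would
# derive it from the type with fieldnames().
macro fields(ex)
    esc(quote
        r = $ex.r
        g = $ex.g
        b = $ex.b
    end)
end

function rgb2gray2!(img::Vector{Pixel})
    for i in 1:length(img)
        @fields img[i]
        v = round(UInt8, 0.30*r + 0.59*g + 0.11*b)
        img[i] = Pixel(v, v, v)
    end
    return img
end
```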



On Thursday, June 12, 2014 12:30:05 PM UTC+8, Keno Fischer wrote:
>
> I don't think it warrants syntax, but might be nice in a macro. I've had 
> cases where I just put my entire simulation state in a single object, so I 
> don't need to give 100s of parameters to every object. In that case (where 
> the object is more of a container than an abstraction), it might be nice to 
> use. 
>
>
> On Thu, Jun 12, 2014 at 12:13 AM, Andrew Simper <[email protected]> wrote:
>
>> So, just to post again to make things clearer: right now algorithms tend 
>> to look pretty ugly and obfuscated, since you have to prefix every field 
>> access with the argument name using dot notation:
>>
>> function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
>>     local v1::Float64 = coef.g0*state.ic1eq - coef.g1*state.ic2eq
>>     local v2::Float64 = coef.g1*state.ic1eq + coef.g0*state.ic2eq
>>     SvfSinOsc(2*v1 - state.ic1eq, 2*v2 - state.ic2eq)
>> end
>>
>>
>> The version below is a lot more readable to me. It would be super useful 
>> to have a "using"-type operation, similar to a namespace but operating on 
>> variables instead, so that although the following is equivalent to the 
>> code above, it is much easier to see what is going on:
>>
>> function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
>>     using state, coef
>>     local v1::Float64 = g0*ic1eq - g1*ic2eq
>>     local v2::Float64 = g1*ic1eq + g0*ic2eq
>>     SvfSinOsc(2*v1 - ic1eq, 2*v2 - ic2eq)
>> end
>>
>> What are people's opinions on this? Would anyone else find it useful?
>>
>>
>>
>> On Friday, June 6, 2014 3:17:31 PM UTC+8, Andrew Simper wrote:
>>>
>>> In implementations where you want named data, I've noticed that the 
>>> algorithm gets obfuscated by lots of variable names with dots after them. 
>>> For example, here is a basic analog model of a state variable filter used 
>>> as a sine wave generator:
>>>
>>> immutable SvfSinOscCoef
>>>     g0::Float64
>>>     g1::Float64
>>> end
>>> immutable SvfSinOsc
>>>     ic1eq::Float64
>>>     ic2eq::Float64
>>> end
>>> function SvfSinOscCoef_Init(; freq=1.0, sr=44100.0)
>>>     local g::Float64 = tan(2pi*freq/sr)
>>>     local g0 = 1.0/(1.0+g^2)
>>>     SvfSinOscCoef(g0, g*g0)
>>> end
>>> function SvfSinOsc_Init(startphase::Float64)
>>>     SvfSinOsc(cos(startphase), sin(startphase))
>>> end
>>>
>>> But the tick function looks a bit messy:
>>>
>>> function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
>>>     local v1::Float64 = coef.g0*state.ic1eq - coef.g1*state.ic2eq
>>>     local v2::Float64 = coef.g1*state.ic1eq + coef.g0*state.ic2eq
>>>     SvfSinOsc(2*v1 - state.ic1eq, 2*v2 - state.ic2eq)
>>> end
>>>
>>>
>>> It would be really cool if there was a way to shorthand the syntax of 
>>> this to something like the following, which is a lot more readable:
>>>
>>> function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
>>>     using state, coef
>>>     local v1::Float64 = g0*ic1eq - g1*ic2eq
>>>     local v2::Float64 = g1*ic1eq + g0*ic2eq
>>>     SvfSinOsc(2*v1 - ic1eq, 2*v2 - ic2eq)
>>> end
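As an aside, something quite close to this shorthand can now be written with property destructuring, `(; a, b) = x`, which current Julia (1.7+) supports. A sketch reusing the type names from the example above, in current `struct` syntax:

```julia
struct SvfSinOscCoef
    g0::Float64
    g1::Float64
end

struct SvfSinOsc
    ic1eq::Float64
    ic2eq::Float64
end

function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
    (; ic1eq, ic2eq) = state   # property destructuring: ic1eq = state.ic1eq, etc.
    (; g0, g1) = coef
    v1 = g0*ic1eq - g1*ic2eq
    v2 = g1*ic1eq + g0*ic2eq
    SvfSinOsc(2*v1 - ic1eq, 2*v2 - ic2eq)
end
```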
>>>
>>>
>>> Lots of algorithms have arguments of the same type, but even then you 
>>> could still apply "using" to just the most heavily used argument. And if 
>>> it doesn't make things clearer or isn't useful, people don't have to use 
>>> it at all.
>>>
>>>
>>>
>>> Another pattern that would be nice to handle cleanly is: fetch state to 
>>> locals, compute on the locals, store the locals back to state. I have 
>>> written code that generates code to handle this, since it is such a pain 
>>> to keep everything in sync, but if there were some way to automate it at 
>>> the language level it would really rock. Here is the longhand way, which 
>>> isn't too bad for this example, but imagine 20 or so variables and 
>>> multiple tick functions:
>>>
>>> type SvfSinOsc
>>>     ic1eq::Float64
>>>     ic2eq::Float64
>>> end
>>>
>>> function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
>>>     local ic1eq::Float64 = state.ic1eq
>>>     local ic2eq::Float64 = state.ic2eq
>>>     for i = 1:100
>>>         # compute algorithm using local copies of state.ic1eq and state.ic2eq
>>>     end
>>>     state.ic1eq = ic1eq
>>>     state.ic2eq = ic2eq
>>>     return state
>>> end
>>>
>>>
>>> I have a feeling that macros may be able to help out here, resulting in 
>>> something like:
>>>
>>> function tick(state::SvfSinOsc, coef::SvfSinOscCoef)
>>>     @fetch state
>>>     for i = 1:100
>>>         # compute iterative algorithm using local copies of state.ic1eq and state.ic2eq
>>>     end
>>>     @store state
>>>     return state
>>> end
>>>
>>> But I'm not sure how to code such a beast; I tried something like:
>>>
>>> macro fetch(obj::SvfSinOsc)
>>>     return quote
>>>         local ic1eq = obj.ic1eq
>>>         local ic2eq = obj.ic2eq
>>>     end
>>> end
>>>
>>> macro store(obj::SvfSinOsc)
>>>     return quote
>>>         obj.ic1eq = ic1eq
>>>         obj.ic2eq = ic2eq
>>>     end
>>> end
>>>
>>> dump(osc)
>>> macroexpand(:(@fetch osc))
>>> macroexpand(:(@store osc))
>>>
>>> SvfSinOsc 
>>>   ic1eq: Float64 1.0
>>>   ic2eq: Float64 0.0
>>>
>>>
>>> Out[28]: :($(Expr(:error, TypeError(:anonymous,"typeassert",SvfSinOsc,:osc))))
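The TypeError above comes from the `obj::SvfSinOsc` annotation: a macro receives expressions, not runtime values, so the argument here is the symbol `:osc` and the typeassert fails at expansion time. A sketch of a working version in current Julia syntax follows; the expansion must also be escaped so `ic1eq`/`ic2eq` land in the caller's scope. The `tick!` function and its update rule are my own illustration, not from the thread:

```julia
mutable struct SvfSinOsc
    ic1eq::Float64
    ic2eq::Float64
end

# Macro arguments are expressions, so no runtime type annotation is possible;
# esc() splices the generated assignments into the caller's scope.
macro fetch(obj)
    esc(quote
        ic1eq = $obj.ic1eq
        ic2eq = $obj.ic2eq
    end)
end

macro store(obj)
    esc(quote
        $obj.ic1eq = ic1eq
        $obj.ic2eq = ic2eq
    end)
end

function tick!(state::SvfSinOsc)
    @fetch state                            # ic1eq = state.ic1eq; ic2eq = state.ic2eq
    ic1eq, ic2eq = 2*ic1eq - ic2eq, ic1eq   # placeholder update, for illustration
    @store state                            # state.ic1eq = ic1eq; state.ic2eq = ic2eq
    return state
end
```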
>>>
>>>
