I actually kind of figured that out. The constructor function can wrap and 
return whatever it wants in Elm. I think we'll still need a native module 
for timing, though.
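
Something like this, I mean (just a sketch; the wrapped thunk discards 
its result, so the type variable never escapes):

    type Benchmark
        = Benchmark (() -> ())

    benchmark : (() -> a) -> Benchmark
    benchmark f =
        -- run f for the work it does, then throw the result away
        Benchmark (\() -> let _ = f () in ())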

On Friday, January 27, 2017 at 4:32:32 PM UTC-6, Joey Eremondi wrote:
>
>> I have some thoughts on how to fix the effectful functions… namely, 
>> lifting them to Tasks. But that doesn't seem like quite the right approach, 
>> since Tasks can fail and these can't. This makes me think that an effect 
>> module may be the best way to handle this.
>>
>
> Couldn't you just make them of type "Task Never Foo"? Then you could 
> use them knowing they won't fail. To me, that seems preferable to 
> having Native dependencies.
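>
> Concretely, something like this sketch (GotTime and myBenchmark are 
> hypothetical names, and the timing primitive itself is still elsewhere):
>
>     -- assuming `import Task` and a `type Msg = GotTime Float | ...`
>     time : Benchmark -> Task Never Float
>
>     timeCmd : Cmd Msg
>     timeCmd =
>         -- Never has no values, so this task can't fail, and
>         -- Task.perform accepts it directly
>         Task.perform GotTime (time myBenchmark)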
>
> I think you could get away without Native otherwise. For example, if 
> your Benchmark internally converted functions of type "() -> a" into 
> "() -> ()" by wrapping them with "\f () -> let _ = f () in ()", you 
> could compile suites in a type-safe way.
>
> On Fri, Jan 27, 2017 at 2:09 PM, Brian Hicks <[email protected]> 
> wrote:
>
>> *Summary:* I'd like a benchmarking library to use in Elm with no 
>> JavaScript dependencies. The sketch I have so far requires native code. Now 
>> that I've proved the concept to myself, I'd like to discuss the best API.
>>
>> I've created the following functions, backed by native code (using 
>> performance.now):
>>
>>     type Benchmark = Benchmark
>>
>>     benchmark : (() -> a) -> Benchmark
>>
>>     time : Benchmark -> Float
>>
>> It does about what you'd expect, except that `benchmark` erases the 
>> function's type so you can have a list of benchmarks in a suite without 
>> having to make them all the same type. I think I can build a reasonable 
>> benchmarking library on top of this.
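>>
>> For example, a single suite can then mix result types (hypothetical 
>> usage):
>>
>>     suite : List Benchmark
>>     suite =
>>         [ benchmark (\() -> List.sort [ 3, 1, 2 ])
>>         , benchmark (\() -> String.repeat 1000 "x")
>>         ]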
>>
>> The first reasonable thing to do is run a benchmark a number of times 
>> and return statistics (I've started with mean runtime). That would be 
>> `runTimes : Int -> Benchmark -> Float`.
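>>
>> As a sketch, that could even be plain Elm on top of `time`, assuming 
>> `time` re-runs the benchmark on each call:
>>
>>     runTimes : Int -> Benchmark -> Float
>>     runTimes n bench =
>>         let
>>             -- total runtime across n samples
>>             total =
>>                 List.sum (List.map (\_ -> time bench) (List.range 1 n))
>>         in
>>             total / toFloat n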
>>
>> My next thought is to box the runs into a set time budget instead. I've 
>> implemented this as `run : Float -> Benchmark -> ( Int, Float )`. The 
>> tuple is `(sample size, mean runtime)`.
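>>
>> Roughly, the timeboxing loop could look like this (a sketch of the 
>> idea only, assuming a positive budget in the same units `time` returns):
>>
>>     run : Float -> Benchmark -> ( Int, Float )
>>     run budget bench =
>>         let
>>             -- keep sampling until the budget is spent
>>             go count total =
>>                 if total >= budget then
>>                     ( count, total / toFloat count )
>>                 else
>>                     go (count + 1) (total + time bench)
>>         in
>>             go 0 0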
>>
>> Now, a third thought… and this is where I'd like feedback. The 
>> implementation so far has some problems:
>>
>> - The Benchmark type removes type information from the functions under 
>> consideration. I can think of ways to get around that in pure Elm (e.g. 
>> making a constructor using `always SomeStubType`), but it just doesn't 
>> feel right.
>> - The `time` and `run*` functions are effectful. They can't help being 
>> effectful if we're going to benchmark properly, but their type 
>> signatures claim they're pure.
>>
>> I have some thoughts on how to fix the effectful functions… namely, 
>> lifting them to Tasks. But that doesn't seem like quite the right approach, 
>> since Tasks can fail and these can't. This makes me think that an effect 
>> module may be the best way to handle this.
>>
>> Either approach would let us use the existing composition functions 
>> (andThen, batch, etc.) to compose benchmarks. If we use an effect 
>> module, we may also be able to "warm up" a particular benchmark before 
>> its first use, for better accuracy when timeboxing.
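>>
>> For instance, if `time` became `Benchmark -> Task Never Float`, timing 
>> a whole suite could be as small as (SuiteDone is a hypothetical Msg):
>>
>>     Task.sequence (List.map time suite)
>>         |> Task.perform SuiteDone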
>>
>> This library is already going to require an exception for native code no 
>> matter what because of `time`. I'd rather get feedback early before I go in 
>> one direction or the other for the API. So, what do y'all think?