If that's the case, I think the benchmarks need to be updated.

A difference of a few percentage points in performance only matters when 
processing massive datasets.

On Monday, December 29, 2014 6:05:21 PM UTC-8, Stefan Karpinski wrote:
>
> I just measured Julia's built-in quicksort against the C quicksort 
> microbenchmark and Julia's built-in quicksort is 25% faster than C at its 
> best optimization setting (which turns out to be -O2). Of course, you can 
> argue that this is unfair because more time and effort has been put into 
> Julia's quicksort than the simple C quicksort for that benchmark. This kind 
> of back and forth can go on ad nauseam and isn't very productive. The 
> message of those benchmarks is that you can write a fast sort in Julia 
> code; obsessing about ±25% is missing the point.
>
> On Mon, Dec 29, 2014 at 8:50 PM, Steven G. Johnson <[email protected]> wrote:
>
>> In general, the philosophy of Julia is to be willing to pay a small price 
>> (up to a factor of 2 compared to C, but usually less) in order to get the 
>> benefit of a high-level, dynamic language with lots of other features that 
>> C lacks (e.g. better error handling).
>>
>> The 24% slowdown in a simple quicksort implementation benchmark is well 
>> within Julia's design tolerances.  And, as I said, the sort routine in 
>> the Julia standard library is actually faster than C's qsort 
>> routine: unlike synthetic benchmarks, practical sorting routines have to 
>> handle any datatype generically, and here Julia's type specialization and 
>> JIT compilation shine.
>>
>> In terms of why Go is 1.11 and Julia is 1.24 times C, when it comes to 
>> these tiny differences you really need to look at micro-level optimizations like 
>> how bounds-checking or inlining is done, but frankly getting another 10% 
>> here is a pretty low priority.  It's much more important to improve 
>> performance in other areas where Julia isn't yet so close to C for 
>> straightforward code.
>>
>
>
