Nathan Tippy wrote them. I just ran the setup.sh script after installing OpenJDK 7 on the Ubuntu machine. I'm not sure how much tweaking is reasonable to allow for Java. For example, it's unreasonable to build a custom NumPy that uses a better BLAS, because most people just won't do that. It's the responsibility of each system to have good defaults.
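On Jake's question in the quoted thread below about how the benchmarks are run: the usual reason Java micro-benchmarks are hard to get right is that HotSpot only JIT-compiles a method after it has run in the interpreter for a while, so a single timed pass can measure mostly interpreted code. Here is a minimal sketch of the common warmup-then-measure pattern; the fib kernel, class name, and iteration counts are illustrative, not the harness from the PR.

public final class WarmupTiming {
    // Illustrative kernel; the real benchmark suite has its own fib.
    static int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    public static void main(String[] args) {
        long sink = 0;
        // Warmup: let HotSpot JIT-compile fib before any timing starts.
        for (int i = 0; i < 10000; i++) sink += fib(20);

        // Measurement: time several runs and keep the best one.
        long best = Long.MAX_VALUE;
        for (int run = 0; run < 5; run++) {
            long t0 = System.nanoTime();
            sink += fib(20);
            best = Math.min(best, System.nanoTime() - t0);
        }
        // Print the sink so the JIT cannot treat the work as dead code.
        System.out.println("best: " + best / 1e6 + " ms (sink " + sink + ")");
    }
}

Tools like JMH automate the warmup, forking, and dead-code pitfalls, but even a hand-rolled warmup phase like this would tell us whether the gaps below are JIT-related or something in the benchmark code itself.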
> On Mar 30, 2014, at 3:15 PM, Jake Bolewski <[email protected]> wrote:
>
> How are you running the benchmarks? These types of micro-benchmarks in Java are really difficult to get right.
>
>> On Sunday, March 30, 2014 1:25:15 PM UTC-4, Stefan Karpinski wrote:
>> Ok, the Java mandel benchmark code cheats by manually inlining and strength reducing all the operations on complex numbers. This benchmark needs to use a complex number type like everyone else.
>>
>>> On Sun, Mar 30, 2014 at 1:18 PM, Stefan Karpinski <[email protected]> wrote:
>>> Ok, this may be more interesting than I thought, but not in a good way for Java. Here are some preliminary results after getting this to run on our benchmark machine:
>>>
>>>   benchmark      C           Java        relative
>>> 1 fib            0.07081     2.792345    39.43433130913713
>>> 2 parse_int      0.231028    4.104516    17.766314039856642
>>> 3 mandel         0.416994    0.272333    0.6530861355319262
>>> 4 quicksort      0.600815    1.850908    3.080662100646622
>>> 5 pi_sum         55.112839   55.533128   1.0076259725977825
>>> 6 rand_mat_stat  16.684055   63.864488   3.827875657326711
>>> 7 rand_mat_mul   106.070995  614.264555  5.791069981006589
>>> 8 printfd        27.725935   145.029354  5.230819231163891
>>>
>>> Those are some rough numbers for Java. It's getting clobbered by C, Fortran, Julia, Go, JavaScript and sometimes even Python. Brutal. We definitely need some Java pros to take a look at the code and make sure it's a fair comparison. The mandel result is also suspicious because it doesn't seem reasonable that Java can be beating C and Fortran by that much.
>>>
>>>> On Sun, Mar 30, 2014 at 11:43 AM, Stefan Karpinski <[email protected]> wrote:
>>>> I merged the Java benchmarks, but couldn't get them to run: https://github.com/JuliaLang/julia/issues/6317. If anyone is a Java pro and wants to take a crack at this, that would be most appreciated.
>>>>
>>>>> On Sun, Mar 30, 2014 at 11:10 AM, Isaiah Norton <[email protected]> wrote:
>>>>> No. http://docs.julialang.org/en/latest/manual/performance-tips/
>>>>>
>>>>>> On Sun, Mar 30, 2014 at 11:06 AM, Freddy Chua <[email protected]> wrote:
>>>>>> I did some simple benchmark on for loop
>>>>>>
>>>>>> a=0
>>>>>> for i=1:1000000000
>>>>>>   a+=1
>>>>>> end
>>>>>>
>>>>>> The C equivalent runs way faster... does that mean julia is slow on loops ?
>>>>>>
>>>>>>> On Sunday, March 30, 2014 10:06:20 PM UTC+8, Isaiah wrote:
>>>>>>> https://github.com/JuliaLang/julia/tree/master/test/perf
>>>>>>>
>>>>>>> > I also wonder why no tests were done with Java..
>>>>>>>
>>>>>>> There is an open PR for Java, which you could check out and try:
>>>>>>>
>>>>>>> https://github.com/JuliaLang/julia/pull/5260
>>>>>>>
>>>>>>>> On Sun, Mar 30, 2014 at 9:55 AM, Freddy Chua <[email protected]> wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I wonder where can I download the source code of these benchmarks, I want to try it on my own... I also wonder why no tests were done with Java..
>>>>>>>>
>>>>>>>>                Fortran    Julia  Python  R       Matlab   Octave   Mathematica  JavaScript    Go
>>>>>>>>                gcc 4.8.1  0.2    2.7.3   3.0.2   R2012a   3.6.4    8.0          V8 3.7.12.22  go1
>>>>>>>> fib            0.26       0.91   30.37   411.36  1992.00  3211.81  64.46        2.18          1.03
>>>>>>>> parse_int      5.03       1.60   13.95   59.40   1463.16  7109.85  29.54        2.43          4.79
>>>>>>>> quicksort      1.11       1.14   31.98   524.29  101.84   1132.04  35.74        3.51          1.25
>>>>>>>> mandel         0.86       0.85   14.19   106.97  64.58    316.95   6.07         3.49          2.36
>>>>>>>> pi_sum         0.80       1.00   16.33   15.42   1.29     237.41   1.32         0.84          1.41
>>>>>>>> rand_mat_stat  0.64       1.66   13.52   10.84   6.61     14.98    4.52         3.28          8.12
>>>>>>>> rand_mat_mul   0.96       1.01   3.41    3.98    1.10     3.41     1.16         14.60         8.51
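To make Stefan's point about mandel earlier in the thread concrete: the other implementations build the arithmetic out of a complex number type, so a fair Java version would look roughly like the sketch below. The Complex class and the loop bounds are a rough illustration, not the code from PR #5260; Java has no built-in complex type, which is part of what the benchmark is probing.

public final class Mandel {
    // Hypothetical complex number type; the JDK has no built-in one.
    static final class Complex {
        final double re, im;
        Complex(double re, double im) { this.re = re; this.im = im; }
        Complex plus(Complex o)  { return new Complex(re + o.re, im + o.im); }
        Complex times(Complex o) { return new Complex(re * o.re - im * o.im,
                                                      re * o.im + im * o.re); }
        double abs2() { return re * re + im * im; } // squared magnitude, no sqrt needed
    }

    // Iterate z = z^2 + c and return the escape count, capped at 80 iterations,
    // roughly mirroring the escape-time structure used in the other languages.
    static int mandel(Complex c) {
        Complex z = c;
        for (int n = 0; n < 80; n++) {
            if (z.abs2() > 4.0) return n;   // |z| > 2, the point has escaped
            z = z.times(z).plus(c);
        }
        return 80;
    }

    public static void main(String[] args) {
        int sum = 0;
        for (double re = -2.0; re <= 0.5; re += 0.1)
            for (double im = -1.0; im <= 1.0; im += 0.1)
                sum += mandel(new Complex(re, im));
        System.out.println(sum); // keep the result live so the work isn't optimized away
    }
}

Whether HotSpot's escape analysis can eliminate the Complex allocations is exactly the kind of cost the benchmark is meant to expose; the manually inlined version sidesteps that question entirely, which is presumably why it appears to beat C and Fortran.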
