----- Original Message -----
> From: "David Edelsohn" <dje....@gmail.com>
> To: "PIERRE AUGIER" <pierre.aug...@univ-grenoble-alpes.fr>
> Cc: "pypy-dev" <pypy-dev@python.org>
> Sent: Monday, December 21, 2020 23:47:22
> Subject: Re: [pypy-dev] Differences performance Julia / PyPy on very similar codes

> You did not state on exactly what system you are conducting the
> experiment, but "a factor of 4" seems very close to the
> auto-vectorization speedup of a vector of floats.

I wrote another very simple benchmark that should not depend on 
auto-vectorization. The bench function is:

```python
def sum_x(positions):
    result = 0.0
    for i in range(len(positions)):
        result += positions[i].x
    return result
```

The scripts are:

- https://github.com/paugier/nbabel/blob/master/py/microbench_sum_x.py
- https://github.com/paugier/nbabel/blob/master/py/microbench_sum_x.jl

Even in this case, Julia is notably (~2.7 times) faster:

```
$ julia microbench_sum_x.jl                                                   
  1.208 μs (1 allocation: 16 bytes)

In [1]: run microbench_sum_x.py
sum_x(positions)
3.29 µs ± 133 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
sum_x(positions_list)
14.5 µs ± 291 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```

For `positions_list`, each point stores its 3 floats in a list rather than in plain attributes.
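For context, here is a minimal sketch of the two data layouts being compared. The real classes live in the linked `microbench_sum_x.py`; the class names and the `@property` accessor below are my own assumptions, not the script's actual code:

```python
# Hypothetical sketch of the two layouts benchmarked with sum_x
# (names are assumptions; see microbench_sum_x.py for the real code).

class Point:
    """Coordinates stored as plain attributes."""
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

class PointWithList:
    """The 3 floats stored in a list; .x goes through an indexing step."""
    def __init__(self, x, y, z):
        self.coords = [x, y, z]

    @property
    def x(self):
        return self.coords[0]

def sum_x(positions):
    result = 0.0
    for i in range(len(positions)):
        result += positions[i].x
    return result

positions = [Point(float(i), 0.0, 0.0) for i in range(1000)]
positions_list = [PointWithList(float(i), 0.0, 0.0) for i in range(1000)]

# Both layouts compute the same sum; only the access path differs.
assert sum_x(positions) == sum_x(positions_list) == 499500.0
```

The extra indirection (attribute lookup, then a list indexing) is presumably what makes the `positions_list` variant slower under PyPy.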

How can I analyze these performance differences? How can I get more information
on what PyPy does with this code?
_______________________________________________
pypy-dev mailing list
pypy-dev@python.org
https://mail.python.org/mailman/listinfo/pypy-dev