Hello,

I've compared three ghdl setups

 - ghdl 0.33    with llvm backend  (from Pete Gavin's .deb package)
 - ghdl 0.33    with  gcc backend  (compiled from SourceForge source kit)
 - ghdl 0.34dev with  gcc backend  (compiled from git master)

with two benchmark cases
 - behavioral simulation  (766 processes, many quite complex)
 - post-synthesis functional simulation (6090 processes, mostly quite slim)
   the code was generated by Vivado with 'write_vhdl'

and I tried different optimization levels

 -O0   (the default for the gcc backend)
 -Og   (optimizations that don't interfere with debugging)
 -O1
 -O2   (the default for the llvm backend)
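
For reference, a minimal sketch of how one such run looks with the gcc backend
(the testbench file and entity names are placeholders, not from the actual
designs above):

```shell
# Analyze and elaborate a testbench at a given optimization level;
# the -O flag is passed through to the gcc (or llvm) code generator.
ghdl -a -O2 tb.vhdl
ghdl -e -O2 tb
# Run the simulation under time(1) to get the real/user/sys numbers.
time ghdl -r tb
```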

For the behavioral simulation I got:

                            compile time           simulation time
                                 real/     user         real/     user/     sys
      ghdl-0.33-llvm -O0    0m10.910s/0m09.635s    1m48.917s/1m49.576s/0m0.686s
      ghdl-0.33-llvm -O2    0m18.319s/0m17.164s    1m17.047s/1m15.265s/0m0.720s

      ghdl-0.33-gcc -O0     0m19.614s/0m17.161s    1m15.832s/1m16.470s/0m0.676s
      ghdl-0.33-gcc -Og     0m23.892s/0m21.514s    0m51.699s/0m52.320s/0m0.669s
      ghdl-0.33-gcc -O2     0m43.271s/0m40.674s    0m51.209s/0m49.033s/0m0.624s
      ghdl-0.33-gcc -O3     0m57.981s/0m55.133s    0m47.617s/0m48.107s/0m0.696s

      ghdl-0.34dev-gcc -O0  0m19.777s/0m16.965s    1m00.541s/1m01.267s/0m0.587s
      ghdl-0.34dev-gcc -Og  0m23.839s/0m20.713s    0m44.975s/0m45.450s/0m0.805s
      ghdl-0.34dev-gcc -O1  0m28.972s/0m26.259s    0m42.085s/0m42.782s/0m0.601s
      ghdl-0.34dev-gcc -O2  0m40.997s/0m38.160s    0m42.661s/0m43.340s/0m0.618s
      ghdl-0.34dev-gcc -O3  0m53.743s/0m50.605s    0m42.550s/0m43.156s/0m0.684s

For the post-synthesis functional simulation I got (the UNISIM lib was
compiled with -O2):

                            compile time           simulation time
                                 real/     user         real/     user/     sys
      ghdl-0.34dev-gcc -O0  0m31.871s/0m30.831s    0m58.664s/0m58.636s/0m0.055s
      ghdl-0.34dev-gcc -Og  0m38.098s/0m37.066s    0m58.109s/0m58.078s/0m0.063s
      ghdl-0.34dev-gcc -O1  0m45.404s/0m44.433s    0m58.373s/0m58.330s/0m0.058s
      ghdl-0.34dev-gcc -O2  1m06.171s/1m04.976s    0m58.541s/0m58.514s/0m0.035s

And finally, the dependence on the UNISIM library optimization level (the model
was always compiled with -Og):

                            compile time           simulation time
                                 real/     user         real/     user/     sys
                UNISIM -O0  0m38.079s/0m37.075s    1m11.720s/1m11.679s/0m0.050s
                UNISIM -O1  0m38.095s/0m37.198s    0m58.729s/0m58.692s/0m0.050s
                UNISIM -O2  0m38.153s/0m37.152s    0m58.451s/0m58.431s/0m0.034s
                UNISIM -O3  0m38.665s/0m37.625s    0m58.388s/0m58.358s/0m0.057s
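
For context, compiling the UNISIM library at a given optimization level can be
sketched like this (the source file names are placeholders for the VHDL sources
shipped with Vivado; adjust paths to your installation):

```shell
# Analyze the UNISIM sources into a separate 'unisim' library at -O2.
ghdl -a --work=unisim -O2 unisim_VCOMP.vhd unisim_VPKG.vhd
```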

From this admittedly very slim data base, I observe:

  1. gcc -O0 and llvm -O2 give similar compile and execution speed
  2. gcc -O2 costs more compile time, but gives better execution speed
  3. gcc -O2 clearly outperforms llvm -O2
  4. gcc -Og seems a good compromise between compile and execution speed
  5. for gate-level models the impact of the optimization level is quite low
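
As a sanity check on observation 3, the speedup can be computed directly from
the real simulation times in the behavioral table above (a small sketch;
`to_seconds` is just a helper for the time(1)-style values):

```python
def to_seconds(t):
    """Convert a time(1)-style 'XmYY.YYYs' string to seconds."""
    minutes, rest = t.split("m")
    return int(minutes) * 60 + float(rest.rstrip("s"))

# Real simulation times from the behavioral benchmark table.
llvm_o2 = to_seconds("1m17.047s")   # ghdl-0.33-llvm -O2
gcc_o2  = to_seconds("0m51.209s")   # ghdl-0.33-gcc  -O2

# gcc -O2 runs the same simulation roughly 1.5x faster than llvm -O2.
print(f"speedup: {llvm_o2 / gcc_o2:.2f}x")
```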

Is that in line with what others have seen?


        With best regards,       Walter


_______________________________________________
Ghdl-discuss mailing list
Ghdl-discuss@gna.org
https://mail.gna.org/listinfo/ghdl-discuss
