Results of the CAESAR Round 3 Hardware Benchmarking

2017-08-18 Thread Kris Gaj
Dear All,

A comprehensive presentation, entitled
  Benchmarking of Round 3 CAESAR Candidates in Hardware: Methodology,
Designs & Results,
has been posted at
  https://cryptography.gmu.edu/athena/index.php?id=CAESAR

The direct link is:

https://cryptography.gmu.edu/athena/presentations/CAESAR_R3_HW_Benchmarking.pdf

This presentation covers
  * A brief overview of the CAESAR HW API and the development of
implementations compliant with this API
  * Overview of VHDL/Verilog code of Round 3 candidates
  * Discussion of Use Cases
  * Our detailed benchmarking methodology
  * Graphical representation of results, including
    - two-dimensional Throughput vs. Area graphs, and
    - the relative speed-up, area reduction, and efficiency improvement
      compared to AES-GCM
  * Hints on effective use of the ATHENa database of results
  * Conclusions.
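As a rough illustration (not the GMU team's exact methodology), the relative metrics listed above can be computed as ratios of the corresponding absolute figures for a candidate and for the AES-GCM baseline. All numbers in the sketch below are made up for demonstration:

```python
# Illustrative sketch: relative metrics of a candidate vs. a baseline
# cipher (AES-GCM). All numeric figures here are hypothetical.

def relative_metrics(throughput, area, base_throughput, base_area):
    """Return (speed-up, area reduction, efficiency improvement) vs. baseline."""
    speedup = throughput / base_throughput
    area_reduction = base_area / area          # > 1 means smaller than baseline
    efficiency = (throughput / area) / (base_throughput / base_area)
    return speedup, area_reduction, efficiency

# Hypothetical figures: throughput in Mbit/s, area in LUTs.
aes_gcm = (3239.0, 3175.0)
candidate = (6500.0, 2500.0)

s, a, e = relative_metrics(*candidate, *aes_gcm)
print(f"speed-up: {s:.2f}x, area reduction: {a:.2f}x, efficiency: {e:.2f}x")
```

A value above 1.0 on any of the three ratios means the candidate beats the baseline on that metric.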

All designs of Round 3 candidates are comprehensively summarized in the
following two tables
  https://cryptography.gmu.edu/athena/CAESAR_HW_Summary_1.html
  https://cryptography.gmu.edu/athena/CAESAR_HW_Summary_2.html

Additionally, we encourage everybody to visit our online database of
results, available at
  https://cryptography.gmu.edu/athenadb/fpga_auth_cipher/rankings_view

For FOUR major rankings, please choose

Family:
 Virtex 6 (default)
 Virtex 7
 Stratix IV
 Stratix V.

After each change to the options, please make sure to click on the
  Update
button, located at the bottom of the Result Filtering area, just above the
table with results.

If you want to return to the default settings, please click on
  FPGA Rankings,
in the menu located on the left side of the page.

For Use Cases 2 & 3, we recommend as the Primary Evaluation Criteria:
  * Throughput/Area (demonstrating the best balance between speed and
area), followed by
  * Throughput (demonstrating the best speed alone).

For Use Case 1, we recommend as the Primary Evaluation Criteria:
  * Area (demonstrating low cost of the implementation), followed by
  * Throughput/Area (demonstrating the best balance between speed and area).

You can switch among these Evaluation Criteria by using the option
Ranking:
  [X] Throughput/Area
  [  ] Throughput
  [  ] Area
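The three ranking options can be thought of as sorting the same result set by different keys. A minimal sketch, with entirely hypothetical entries:

```python
# Minimal sketch of the three ranking criteria; names and numbers
# are hypothetical, for demonstration only.
results = [
    # (name, throughput in Mbit/s, area in LUTs)
    ("cipher-A", 6000.0, 2000.0),
    ("cipher-B", 9000.0, 4500.0),
    ("cipher-C", 2000.0,  900.0),
]

by_tp_over_area = sorted(results, key=lambda r: r[1] / r[2], reverse=True)
by_throughput   = sorted(results, key=lambda r: r[1], reverse=True)
by_area         = sorted(results, key=lambda r: r[2])  # smaller is better

print([r[0] for r in by_tp_over_area])  # best throughput/area first
```

Note that the three orderings generally disagree, which is why the recommended primary criterion differs between Use Case 1 and Use Cases 2 & 3.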

The results for four additional designs
   CLOC-TWINE, SILC-LED, SILC-PRESENT (from the CLOC-SILC Team), and
   JAMBU-SIMON (a revised version from CCRG NTU Singapore)
are not yet available, due to the incompatibility of the received code with
our FPGA tools (CLOC/SILC) and the very recent submission date (JAMBU).

If the benchmarking of these implementations is successful, we will add the
obtained results to the database and to the presentation within the next
few days.

For a one-stop page with links to all the aforementioned resources (and
many more), please visit:
   https://cryptography.gmu.edu/athena/index.php?id=CAESAR

Any comments, questions, and suggestions for modifications & extensions
are very welcome!

Regards,

Kris
on behalf of the GMU Benchmarking Team
https://cryptography.gmu.edu/athena
https://cryptography.gmu.edu
http://ece.gmu.edu/~kgaj

-- 
You received this message because you are subscribed to the Google Groups 
"Cryptographic competitions" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to crypto-competitions+unsubscr...@googlegroups.com.
To post to this group, send email to crypto-competitions@googlegroups.com.
Visit this group at https://groups.google.com/group/crypto-competitions.
For more options, visit https://groups.google.com/d/optout.


CAESAR round 3

2016-09-18 Thread D. J. Bernstein
https://competitions.cr.yp.to/caesar-submissions.html now has updated
files for the third round (except Tiaoxin, which still needs a PDF
update for compliance with the new "use case" documentation
requirement). Submitters should check that their updates are
appropriately labeled on the web page.

Reference software implementations for all tweaked submissions are due
2016.10.15 23:59 GMT. Submitters are now allowed to choose either of the
following addresses:

   * recommended: ebats at list.cr.yp.to (you must answer an auto-reply);
   * allowed: caesar-submissions at competitions.cr.yp.to.

The first address is the public mailing list for several benchmarking
projects, including eBAEAD. I'll let Kris comment separately on what he
would like to see for hardware implementations.

My understanding is that there are no algorithmic changes in AEGIS,
AES-OTR, AEZ, CLOC+SILC, JAMBU, Keyak, OCB, and Tiaoxin, so those do not
need new software implementations (except, of course, to demonstrate any
further optimizations). There _is_ an algorithmic change from Ascon v1.1
to Ascon v1.2 despite the v1 numbering, and similarly there is an
algorithmic change from Deoxys v1.3 to Deoxys v1.4.

---Dan
