Hi,

Inspired by Joachim's initiative to run Coverity on parts of the Cryptech code base, I've compiled a list of best practices from the Tor Project which Cryptech could consider adopting. Some of the procedures and tools listed might already be in place or under way; the bullets regarding tests are examples of that.
This is a starting point, with a focus on C code. Some of it can be applied to Python code and perhaps Verilog. More work is needed. In no particular order:

- Code review

  Have at least two maintainers for every git repo, enforcing a second pair of eyes on code before it goes into any branch that ends up in a release. This can help with getting manual code review done, but limiting review to being performed by maintainers only should be avoided. A variant of this is to require sign-off by at least N other developers knowledgeable in the code base at hand. Reviewing code can be hard; guidelines for reviewing would be needed.

- Using the compiler(s)

  Compiling C code with -Werror and other flags instructing the compiler to be more strict makes it harder to ignore possibly less optimal code constructs. Compiling C code with -fstack-protector-all and other flags making the compiler emit code for various types of "hardening" can help catch some types of errors both at compile time and at run time.

- More static code analysis

  Run static code analysis beyond what the compiler usually performs. Useful tools include Coverity, the clang static analyzer and the runtime sanitizers.

- Detecting memory leakage

  Running code under valgrind to find out if and where a program leaks memory can help in spotting memory handling errors.

- Unit tests

  Write tests for functions (or sets of functions). Making sure that all code is being run can help ensure functionality and also catch regressions. Using tools like gcov and lcov to measure test coverage can help with knowing what code is not being tested.

- Integration tests

  Write tests for external and device-internal interfaces. Testing the interfaces used for communicating with the device (e.g. RPC, CLI), interfaces between components on the device (e.g. ARM-FPGA, tamper MCU-MKM-FPGA, ARM-RTC), as well as command line interfaces to programs executed on a computer used for interacting with a device, can help in similar ways as unit tests do.
  Fuzzing of such interfaces could be considered as falling into this category of tests; see below for more on fuzzing.

- Fuzzing of interfaces

  Make it easy to run fuzzers on those interfaces where it makes sense. This includes compiling and maintaining corpora for those interfaces. Useful fuzzers include AFL [0], libFuzzer [1] and OSS-Fuzz [2].

  [0] http://lcamtuf.coredump.cx/afl/
  [1] http://llvm.org/docs/LibFuzzer.html
  [2] https://github.com/google/oss-fuzz

- Document all interfaces

  Pick a method and tools for documenting all interfaces and make sure it gets done and is kept up to date. This goes for all kinds of interfaces -- APIs (including functions, global variables, types, structures), wire protocols, UIs and more.

_______________________________________________
Tech mailing list
Tech@cryptech.is
https://lists.cryptech.is/listinfo/tech