The Paradyn project has two new technical reports in the areas of binary tools
and performance evaluation.

Our full list of project publications can always be found at:
 http://www.paradyn.org/html/publications-by-year.html

Comments and feedback on our papers are always welcome!

------------------------------------------------------------------------------
"Exposing Hidden Performance Opportunities in High Performance GPU
Applications", Benjamin Welton and Barton P. Miller.
Submitted for publication, May 2017.
ftp://ftp.cs.wisc.edu/paradyn/papers/welton-unobvious.pdf

Abstract:
   The emergence of leadership class systems with nodes containing many-core
   accelerators, such as GPUs, has the potential to vastly increase the
   performance of distributed applications. Exploiting the additional
   parallelism that many-core accelerators offer is fraught with challenges.
   Developers and existing performance tools focus on a subset of these
   challenges, primarily the identification of CPU code that may be suited
   for many-core parallelization and improving the efficiency of existing
   many-core code. While this focus has resulted in application performance
   improvements, a significant amount of untapped performance still remains.
   These untapped opportunities take the form of unobvious many-core
   parallelization opportunities that were missed, as well as inefficiencies in
   handling interactions with the accelerator, such as memory copies and
   synchronization. In this work we address three issues: (1) characterize
   the missed performance opportunities in many-core applications that are
   not detected by current performance tools and techniques, (2) design
   detection methods that can be used by performance tools to identify these
   missed opportunities, and (3) apply these techniques to five large scale
   scientific applications (Qball, Qbox, HOOMD-blue, LAMMPS, and cuIBM),
   resulting in a reduction of their execution time of between 18% and 87%.
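One class of accelerator-interaction inefficiency the abstract mentions is
excessive small memory copies. As a purely illustrative sketch (the trace
format, thresholds, and function names here are invented, not taken from the
paper or from any real tool), a performance tool could scan a trace of
host-to-device transfer events and flag runs of small copies that might be
batched into a single larger transfer:

```python
# Hypothetical sketch: flag runs of small host-to-device copies in an
# (op, bytes) event trace that could be batched into one transfer.
# Event names and thresholds are illustrative only.

SMALL_COPY_BYTES = 4096   # copies smaller than this count as "small"
MIN_RUN = 3               # flag runs of at least this many small copies

def find_batchable_copy_runs(trace):
    """Return (start_index, count, total_bytes) for each run of
    consecutive small memcpy events in the trace."""
    runs = []
    start, count, total = None, 0, 0
    # Append a sentinel event so the final run is flushed.
    for i, (op, size) in enumerate(trace + [("sync", 0)]):
        if op == "memcpyHtoD" and size < SMALL_COPY_BYTES:
            if start is None:
                start = i
            count += 1
            total += size
        else:
            if count >= MIN_RUN:
                runs.append((start, count, total))
            start, count, total = None, 0, 0
    return runs

trace = [
    ("memcpyHtoD", 256), ("memcpyHtoD", 128), ("memcpyHtoD", 512),
    ("kernel", 0),
    ("memcpyHtoD", 1 << 20),   # one large copy: left alone
]
print(find_batchable_copy_runs(trace))  # -> [(0, 3, 896)]
```

A real tool would of course gather such a trace by intercepting the
accelerator API rather than from a hand-written list; the point is only the
shape of the analysis.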
------------------------------------------------------------------------------
"Structured Random Differential Testing of Instruction Decoders", Nathan Jay
and Barton P. Miller, submitted for publication, May 2017.
ftp://ftp.cs.wisc.edu/paradyn/papers/JayFleece.pdf

Abstract:
   Decoding binary executable files is a critical facility for software
   analysis, including debugging, performance monitoring, malware detection,
   cyber forensics, and sandboxing, among other techniques. As a foundational
   capability, binary decoding must be consistently correct for the techniques
   that rely on it to be viable. Unfortunately, modern instruction sets are
   huge and their encodings are complex; as a result, modern binary decoders
   are buggy. In this paper, we present a testing methodology that
   automatically infers structural information for an instruction set and uses
   the inferred structure to efficiently generate structured-random test cases
   independent of the instruction set being tested. Our testing methodology
   includes automatic output verification using differential analysis and
   reassembly to generate error reports. This testing methodology requires
   little instruction-set-specific knowledge, allowing rapid testing of
   decoders for new architectures and extensions to existing ones. We have
   implemented our testing procedure in a tool named Fleece and used it to test
   multiple binary decoders (Intel XED, libopcodes, LLVM, Dyninst and Capstone)
   on multiple architectures (x86, ARM and PowerPC). Our testing efficiently
   covered thousands of instruction format variations for each instruction set
   and uncovered decoding bugs in every decoder we tested.
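The core differential idea can be sketched on a toy instruction set.
Everything below — the two-byte ISA, both decoders, and the seeded operand-swap
bug — is invented for illustration; Fleece itself tests real decoders such as
XED, libopcodes, LLVM, Dyninst, and Capstone:

```python
import random

# Toy ISA: byte 0 = opcode (0=add, 1=sub, 2=mov),
#          byte 1 = operands (destination in high nibble, source in low).

def decode_ref(b):
    """Reference decoder for the toy ISA."""
    ops = {0: "add", 1: "sub", 2: "mov"}
    if b[0] not in ops:
        return "invalid"
    return "%s r%d, r%d" % (ops[b[0]], b[1] >> 4, b[1] & 0xF)

def decode_buggy(b):
    """Second decoder with a seeded bug: swaps the operands of 'sub'."""
    ops = {0: "add", 1: "sub", 2: "mov"}
    if b[0] not in ops:
        return "invalid"
    hi, lo = b[1] >> 4, b[1] & 0xF
    if b[0] == 1:
        hi, lo = lo, hi   # the seeded bug
    return "%s r%d, r%d" % (ops[b[0]], hi, lo)

def differential_test(n_cases, seed=0):
    """Generate structured-random encodings (always a valid opcode,
    random operand byte) and report inputs where the decoders disagree."""
    rng = random.Random(seed)
    disagreements = []
    for _ in range(n_cases):
        enc = bytes([rng.randrange(3), rng.randrange(256)])
        a, b = decode_ref(enc), decode_buggy(enc)
        if a != b:
            disagreements.append((enc.hex(), a, b))
    return disagreements

reports = differential_test(1000)
print("found %d disagreements, e.g. %s" % (len(reports), reports[0]))
```

Note how the structured generation (a valid opcode byte plus a random operand
byte) concentrates test cases on well-formed encodings, and how every
disagreement the harness finds here involves a 'sub' instruction, localizing
the bug; the reassembly check described in the abstract is omitted from this
sketch.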
_______________________________________________
Dyninst-api mailing list
Dyninst-api@cs.wisc.edu
https://lists.cs.wisc.edu/mailman/listinfo/dyninst-api