[ The Types Forum (announcements only),
http://lists.seas.upenn.edu/mailman/listinfo/types-announce ]
Dear all,
John Wickerson and I are looking to hire a Postdoctoral Research Associate to
work on next-generation techniques for randomized testing (which could include
testing of aspects of programming language implementations). The post is for
two years, and full details can be found
here: https://www.imperial.ac.uk/jobs/description/ENG01930/research-associate-randomized-testing
Details of some of the exciting technical questions that could be
investigated as part of the postdoc are also pasted below.
The closing date is 9th December. Do feel free to contact me in advance if you
are considering applying and have questions. And I'd be grateful if you could
pass this on to interested candidates.
Many thanks,
Ally Donaldson
Randomized testing, or fuzzing, can be effective at finding defects and
vulnerabilities in systems. It works by running a system over and over again,
using randomly generated or randomly mutated inputs. Traditional fuzzing,
popularised by tools such as American Fuzzy Lop and libFuzzer, focusses on
finding vulnerabilities by running a system on inputs that have been malformed
via small mutations. In many ways this is great: the tools can be applied to a
wide class of applications because their mutations (such as flipping bits) can
be applied regardless of data format, and the malformed inputs that they create
do a good job of finding exploitable vulnerabilities in the front-end of a
system. However, these techniques have limited ability to find functional
defects, where the system does the wrong thing without necessarily crashing.
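As a toy illustration of the mutation-based approach described above, the sketch below flips random bits in a seed input and repeatedly runs a target on the results. This is a minimal sketch only, not how American Fuzzy Lop or libFuzzer are actually implemented; the target function and seed are hypothetical.

```python
import random

def mutate(data: bytes, num_mutations: int = 4) -> bytes:
    """Randomly flip bits in a copy of the input, in the spirit of a
    traditional mutation-based fuzzer (simplified illustration)."""
    buf = bytearray(data)
    for _ in range(num_mutations):
        pos = random.randrange(len(buf))
        buf[pos] ^= 1 << random.randrange(8)  # flip one bit
    return bytes(buf)

def fuzz(target, seed: bytes, iterations: int = 1000):
    """Run the target over and over on mutated inputs, collecting any
    input that makes it crash (here: raise an exception)."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception:
            crashes.append(candidate)
    return crashes
```

Note that, as discussed above, such a loop only notices crashes: a target that silently computes the wrong answer on a malformed input would pass straight through, which is exactly the limitation that motivates functional fuzzing.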
In this project we are interested in stretching fuzzing techniques so that they
can be used to check both deeper functional properties of systems and
non-functional properties such as performance.
The position offers an exciting opportunity for conducting internationally
leading and impactful research work that pushes the boundaries of randomized
testing. The Research Associate will be responsible for investigating topics
such as whether fuzzing can be applied to find performance bugs in systems,
whether functional fuzzing can be applied in novel domains, whether there is
scope for generalising and automating the process of fuzzer creation, and
whether functional fuzzers could benefit from, or contribute to, efforts in
formal verification and symbolic execution. There is a lot of scope for
flexibility, but example research questions that could be investigated during
the project include:
* Can we apply functional fuzzing to a larger class of domains? This could
involve a number of case studies in which domain-specific fuzzers are built
for a variety of domains, such as video processing tools, ASIC synthesis tools,
and web browsers.
* Can we use fuzzing to identify performance defects (e.g. inputs that
cause the system under test to behave in a particularly unresponsive manner)
or violations of other non-functional properties? Despite the focus on
non-functional properties, functional fuzzing would be essential here in order
to explore realistic scenarios.
* Can we automate the process of deriving a domain-specific fuzzer? Many
aspects of writing a fuzzing tool are repetitive, but there is much “devil in
the detail” due to specifics of the input format. Could we design an elegant
representation for specifying these details, from which the more mundane parts
of a fuzzing tool could be generated?
* When a formal specification for a system under test is available, e.g.
thanks to a formal verification effort, can this be leveraged as a starting
point for a functional fuzzer?
* Can a domain-specific fuzzer be used to create interesting seed inputs
that can be used to guide symbolic execution?
* Can functional fuzzing be useful in application domains where the notion
of “correct” is not well defined, such as machine learning and computer vision?
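To make the question about automating fuzzer derivation concrete, one very rough sketch of such a representation is a small grammar describing the input format, from which the mundane generation machinery is derived mechanically. Everything below (the dict-of-productions format and the generate helper) is a hypothetical illustration under assumed conventions, not a proposed design.

```python
import random

# A toy declarative representation of an input format: a context-free
# grammar mapping each nonterminal to its list of productions.
GRAMMAR = {
    "<expr>": [["<num>"], ["<expr>", "+", "<expr>"], ["(", "<expr>", ")"]],
    "<num>": [["0"], ["1"], ["42"]],
}

def generate(symbol="<expr>", depth=0, max_depth=6):
    """Expand a nonterminal by picking a random production. This recursive
    machinery is the repetitive part that could be generated once the
    format-specific details live in the grammar."""
    if symbol not in GRAMMAR:
        return symbol  # terminal: emit as-is
    productions = GRAMMAR[symbol]
    if depth >= max_depth:
        productions = [productions[0]]  # force the simplest rule so expansion terminates
    choice = random.choice(productions)
    return "".join(generate(s, depth + 1, max_depth) for s in choice)
```

Here every generated string is a well-formed arithmetic expression, so only the grammar (the "devil in the detail") is specific to the domain; the same expansion loop would serve any input format expressed this way.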