We are excited to release FuzzBench, a fully automated, open source, free service for evaluating fuzzers. The goal of FuzzBench is to make it painless to rigorously evaluate fuzzing research and to make fuzzing research easier for the community to adopt.
Fuzzing is an important bug finding technique. At Google, we have found tens of thousands of bugs (1, 2) with fuzzers like libFuzzer and AFL. There are numerous research papers that either improve upon these tools (e.g. MOpt-AFL, AFLFast, etc) or introduce new techniques (e.g. Driller, QSYM, etc) for bug finding. However, it is hard to know how well these new tools and techniques generalize on a large set of real world programs. Though research typically includes evaluations, these often have shortcomings: they don't use a large and diverse set of real world benchmarks, use too few trials, use short trials, or lack statistical tests to demonstrate whether findings are significant. This is understandable, since full scale experiments can be prohibitively expensive for researchers. For instance, a 24-hour, 10-trial, 10 fuzzer, 20 benchmark experiment would require 10 × 10 × 20 = 2,000 CPUs to complete in a day.
To help solve these problems, the OSS-Fuzz team is launching FuzzBench, a fully automated, open source, free service. FuzzBench provides a framework for easily evaluating fuzzers in a reproducible way. To use FuzzBench, researchers can simply integrate a fuzzer, and FuzzBench will run an experiment for 24 hours with many trials and real world benchmarks. Based on data from this experiment, FuzzBench will produce a report comparing the performance of the fuzzer to others and give insights into the strengths and weaknesses of each fuzzer. This should allow researchers to spend more of their time perfecting techniques and less time setting up evaluations and dealing with existing fuzzers.
Integrating a fuzzer with FuzzBench is simple, as most integrations are less than 50 lines of code (example). Once a fuzzer is integrated, it can fuzz almost all 250+ OSS-Fuzz projects out of the box. We have already integrated 10 fuzzers, including AFL, libFuzzer, Honggfuzz, and several academic projects such as QSYM and Eclipser.
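To give a feel for what an integration involves, here is a minimal sketch of a fuzzer.py, loosely modeled on the public AFL example in the repo. The build()/fuzz() entry points and the utils.build_benchmark() helper follow those examples, but treat this as an illustrative sketch and check the repo for the current API:

```python
# fuzzer.py: minimal sketch of a FuzzBench integration, loosely modeled
# on the repo's AFL example. The entry points and the utils helper are
# assumptions based on the public examples, not a canonical reference.
import os
import subprocess

from fuzzers import utils  # helper module provided by the FuzzBench framework


def build():
    """Build a benchmark with the fuzzer's compile-time instrumentation."""
    os.environ['CC'] = 'afl-clang-fast'     # AFL's instrumenting C compiler
    os.environ['CXX'] = 'afl-clang-fast++'  # ...and its C++ counterpart
    os.environ['FUZZER_LIB'] = '/libAFL.a'  # driver that links the fuzz target
    utils.build_benchmark()


def fuzz(input_corpus, output_corpus, target_binary):
    """Run the fuzzer; FuzzBench measures coverage from output_corpus."""
    subprocess.call([
        './afl-fuzz',
        '-i', input_corpus,   # seed corpus supplied by FuzzBench
        '-o', output_corpus,  # findings directory FuzzBench snapshots
        '--',
        target_binary,
    ])
```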
Reports include statistical tests to give a sense of how likely it is that performance differences between fuzzers are simply due to chance, as well as the raw data so researchers can do their own analysis. Performance is measured by the number of covered program edges, though we plan on adding crashes as a performance metric. You can view a sample report here.
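For intuition, here is a toy sketch of the kind of significance test such a report relies on, comparing per-trial final edge coverage for two fuzzers with a Mann-Whitney U test via scipy. The coverage numbers are invented for illustration and are not FuzzBench output:

```python
# Illustrative only: test whether the coverage difference between two
# fuzzers across trials is likely due to chance. Values are made up.
from scipy.stats import mannwhitneyu

fuzzer_a_edges = [2310, 2455, 2290, 2500, 2380]  # one value per 24h trial
fuzzer_b_edges = [2105, 2150, 2098, 2230, 2120]

stat, p_value = mannwhitneyu(fuzzer_a_edges, fuzzer_b_edges,
                             alternative='two-sided')
print(f'U = {stat}, p = {p_value:.4f}')
# A small p-value suggests the difference is unlikely to be chance alone.
```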
How to Participate
Our goal is to develop FuzzBench with community contributions and input so that it becomes the gold standard for fuzzer evaluation. We invite members of the fuzzing research community to contribute their fuzzers and techniques, even while they are in development. Better evaluations will lead to more adoption and greater impact for fuzzing research.
We also encourage contributions of better ideas and techniques for evaluating fuzzers. Though we have made some progress on this problem, we have not solved it and we need the community's help in developing these best practices.
Please join us by contributing to the FuzzBench repo on GitHub.
