AutoPatchBench is a new benchmark that measures how well AI tools can fix code bugs. It focuses on C and C++ vulnerabilities discovered through fuzzing, and it includes 136 real bugs with verified fixes drawn from the ARVO dataset.

[Figure: Patch generation flowchart]

AutoPatchBench is part of Meta's CyberSecEval 4, a benchmark suite designed to objectively evaluate and compare LLM-based auto-patching agents on vulnerabilities specifically identified via fuzzing, a widely used method of uncovering software flaws.
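The evaluation loop such a benchmark implies is straightforward: reproduce the fuzzing crash on the vulnerable checkout, apply the candidate patch, rebuild, and confirm the crash input no longer triggers. The sketch below illustrates that flow in Python; the function names, build commands, and `PatchResult` type are hypothetical illustrations, not AutoPatchBench's actual API.

```python
import subprocess
from dataclasses import dataclass
from pathlib import Path

# Hypothetical fix-and-verify loop for a fuzzing-found bug.
# None of these names come from AutoPatchBench itself.

@dataclass
class PatchResult:
    builds: bool            # does the patched project still compile?
    crash_reproduced: bool  # does the original crash input still trigger?

def run(cmd: list[str], cwd: Path) -> bool:
    """Run a command in `cwd`, returning True on exit code 0."""
    return subprocess.run(cmd, cwd=cwd, capture_output=True).returncode == 0

def evaluate_patch(repo: Path, patch_file: Path, crash_input: Path) -> PatchResult:
    # 1. Apply the LLM-generated patch to the vulnerable checkout.
    if not run(["git", "apply", str(patch_file)], cwd=repo):
        return PatchResult(builds=False, crash_reproduced=True)

    # 2. Rebuild the fuzz target (assumes a make-based build).
    if not run(["make", "fuzz_target"], cwd=repo):
        return PatchResult(builds=False, crash_reproduced=True)

    # 3. Re-run the saved crash reproducer; fuzz-target binaries
    #    typically exit nonzero when the crash still fires.
    crash_reproduced = not run(["./fuzz_target", str(crash_input)], cwd=repo)
    return PatchResult(builds=True, crash_reproduced=crash_reproduced)

if __name__ == "__main__":
    result = evaluate_patch(Path("vuln_repo"), Path("candidate.patch"),
                            Path("crash-poc.bin"))
    print("patch verified" if result.builds and not result.crash_reproduced
          else "patch rejected")
```

A stricter verifier would go further, since silencing one crash input does not prove the root cause was fixed; re-fuzzing the patched target or checking that the fix matches the developer's verified patch are natural extensions.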
The post AutoPatchBench: Meta’s new way to test AI bug fixing tools appeared first on Help Net Security.
http://news.poseidon-us.com/TKv3H8