garak serves as an automated vulnerability scanner for Large Language Models (LLMs), functioning similarly to a red team or penetration tester. It orchestrates a workflow where Probes generate malicious prompts to attack a target Generator, Buffs obfuscate these attacks, Detectors analyze the model's outputs for failures, and Evaluators grade the overall security posture.
garak is organized as connected concepts and components. Start broad, then drill down chapter by chapter.
garak serves as an automated vulnerability scanner for Large Language Models (LLMs), functioning similarly to a red team or penetration tester. It orchestrates a workflow where Probes generate malicious prompts to attack a target Generator, Buffs obfuscate these attacks, Detectors analyze the model's outputs for failures, and Evaluators grade the overall security posture.
Source Repository: https://github.com/NVIDIA/garak
Follow sequentially or jump to any topic. Start with Generators (Model Interfaces).
This tutorial was automatically generated by Code IQ and rendered with the shared tutorial site builder. It can be produced for any repository tutorial folder that follows the numbered markdown chapter layout.
View Code IQ ↗