An open-source, configuration-driven toolkit for building audio deepfake detectors
DeepFense is built upon key concepts that enable flexible and powerful detection pipelines.
A decoupled architecture lets you swap Wav2Vec2 for WavLM, or AASIST for an MLP, with a single line of config.
All hyperparameters, augmentation pipelines, and model architectures are defined in simple YAML files.
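To make the config-driven idea concrete, here is a sketch of what such an experiment config might encode, written as a Python dict for illustration. The key names and values below are assumptions for illustration, not DeepFense's actual YAML schema.

```python
# Hypothetical sketch of what a YAML experiment config encodes, shown as a
# Python dict. Key names are illustrative assumptions, not the real schema.
config = {
    "frontend": "wav2vec2",              # feature extractor
    "backend": "aasist",                 # classifier head
    "augmentations": ["rawboost", "rir_reverb"],
    "training": {"lr": 1e-4, "batch_size": 16, "epochs": 50},
}

# Because components are decoupled, swapping models is a one-line change:
config["frontend"] = "wavlm"
config["backend"] = "mlp"
```

The point is that the training entry point stays the same; only the declarative description of the experiment changes.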
Built-in pipelines for RawBoost, RIR reverb, Codec simulation, Morph, AdditiveNoise, SpeedPerturb, AddBabble, DropFreq, DropChunk, and more to robustify your models.
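As one concrete example of such an augmentation, additive noise at a target SNR can be sketched in a few lines of pure Python. This is a minimal illustration of the technique, not DeepFense's implementation, which presumably operates on batched tensors.

```python
import math
import random

def add_noise(signal, snr_db, seed=0):
    """Mix white Gaussian noise into `signal` at the requested SNR in dB.

    Pure-Python sketch: scales the noise so that
    signal_power / noise_power == 10 ** (snr_db / 10).
    """
    rng = random.Random(seed)
    sig_power = sum(s * s for s in signal) / len(signal)
    noise = [rng.gauss(0.0, 1.0) for _ in signal]
    noise_power = sum(n * n for n in noise) / len(noise)
    target_power = sig_power / (10 ** (snr_db / 10))
    scale = math.sqrt(target_power / noise_power)
    return [s + scale * n for s, n in zip(signal, noise)]
```

Varying the SNR (and the noise source, e.g. babble instead of white noise) during training exposes the detector to acoustic conditions it will meet in the wild.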
Automatic tracking of EER, minDCF, and F1-score with integrated WandB logging and checkpointing.
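For reference, the equal error rate (EER) can be sketched in a few lines of pure Python: sweep a decision threshold and report the point where the false acceptance and false rejection rates cross. DeepFense's own metric code may differ (for example, by interpolating between thresholds).

```python
def compute_eer(bonafide_scores, spoof_scores):
    """Return the EER given higher-is-more-bonafide scores.

    Sweeps thresholds over all observed scores; at each threshold,
    FAR = fraction of spoof accepted, FRR = fraction of bona fide rejected.
    The EER is taken where |FAR - FRR| is smallest.
    """
    thresholds = sorted(set(bonafide_scores) | set(spoof_scores))
    best_gap, eer = 1.0, 1.0
    for t in thresholds:
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        frr = sum(s < t for s in bonafide_scores) / len(bonafide_scores)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

Perfectly separable scores give an EER of 0; overlapping score distributions push it toward 0.5.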
Robust handling of variable-length audio with smart padding, unified dataset construction, and efficient collation.
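The padding-and-collation step can be sketched as follows. This is a pure-Python illustration of the idea; the framework's actual collator presumably works on tensors and returns attention masks.

```python
def pad_collate(batch, pad_value=0.0):
    """Collate a list of (waveform, label) pairs of varying length.

    Pads every waveform to the longest one in the batch and records the
    original lengths so the model can mask out padded samples.
    """
    max_len = max(len(wav) for wav, _ in batch)
    padded = [wav + [pad_value] * (max_len - len(wav)) for wav, _ in batch]
    lengths = [len(wav) for wav, _ in batch]
    labels = [label for _, label in batch]
    return padded, lengths, labels
```

Keeping the true lengths alongside the padded batch is what lets downstream layers ignore the padding rather than treat it as silence.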
Unified reporting protocols ensure every experiment is traceable, comparable, and fully reproducible.
Explore DeepFense's specialized components for building detection systems.
Accelerate your research with our open-source model zoo and dataset scripts.
Access state-of-the-art checkpoints for WavLM, EAT, and AASIST. Ready for inference or fine-tuning on your own data.
We provide cleaned and aligned versions of major audio deepfake benchmarks (ASVspoof 2019/2021, In-the-Wild) released as prepared Parquet files for immediate use.
The Parquet files contain standardized metadata and file references rather than wav files. For reproducibility, we also provide scripts to automatically download the original datasets (wav files and labels) and construct the Parquet files.
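For illustration, a per-utterance record in such a metadata file could look like the following; the field names below are assumptions for the sake of example, not the published schema.

```python
# Hypothetical per-utterance metadata records (field names are illustrative).
rows = [
    {"utt_id": "utt_0001", "audio_path": "wav/utt_0001.flac",
     "label": "bonafide", "split": "train"},
    {"utt_id": "utt_0002", "audio_path": "wav/utt_0002.flac",
     "label": "spoof", "split": "train"},
]

# Downstream code can filter and join on these fields without touching audio:
train_spoof = [r["utt_id"] for r in rows
               if r["split"] == "train" and r["label"] == "spoof"]
```

Storing references and labels rather than audio keeps the distributed files small while the download scripts reconstruct the full datasets locally.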
Install the latest release from PyPI:

pip install deepfense==0.1

Or install from source for development:

git clone https://github.com/Yaselley/deepfense-framework
cd deepfense-framework
pip install -e .
Want to learn more? Check out our step-by-step tutorials and pre-configured recipes.
DeepFense is an open-source project. We welcome contributions to add new models, datasets, and improvements.
Contribute on GitHub