Welcome to the AI4NGOs Platform
TRIED Benchmark is an independent evaluation platform developed to assess the performance of deepfake detection tools in diverse, realistic environments. Rather than being a detection tool itself, TRIED provides a structured framework for testing and comparing the accuracy and reliability of AI detection tools under practical conditions, including multiple languages, low-quality footage, and varied recording settings. Created in collaboration with WITNESS and academic partners, TRIED aims to guide organizations, journalists, and policymakers in choosing the most context-appropriate detection solutions.
TRIED Benchmark is especially valuable for humanitarian actors who need to verify the authenticity of visual content in conflict or crisis zones. By offering real-world evaluations of detection tools across different accents, lighting conditions, and video qualities, TRIED helps NGOs choose the tools best suited to their specific operating environment. This is critical in contexts such as Gaza, Syria, or Sudan, where misinformation can endanger communities and discredit legitimate reporting. Using TRIED as a reference improves the integrity of field documentation, helps protect the credibility of human rights evidence, and supports responsible media verification practices. It also allows humanitarian organizations to use resources efficiently by avoiding ineffective or mismatched AI tools. While it is not a detection system itself, TRIED strengthens digital literacy and helps organizations adopt safer, evidence-based AI practices.
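TRIED itself has no API or self-service interface, but the kind of per-condition comparison its reports enable can be sketched in a few lines. The snippet below is purely illustrative: the tool names, conditions, and outcomes are invented examples, not TRIED data, and they stand in for the benchmark's idea of scoring each tool separately under each recording condition.

```python
from collections import defaultdict

# Hypothetical benchmark rows: (tool, condition, prediction_was_correct).
# All names and values are invented for illustration only.
results = [
    ("tool_a", "low_light", True), ("tool_a", "low_light", False),
    ("tool_a", "compressed", True), ("tool_a", "compressed", True),
    ("tool_b", "low_light", True), ("tool_b", "low_light", True),
    ("tool_b", "compressed", False), ("tool_b", "compressed", True),
]

def per_condition_accuracy(rows):
    """Return {tool: {condition: accuracy}} from (tool, condition, correct) rows."""
    tally = defaultdict(lambda: [0, 0])  # (tool, condition) -> [correct, total]
    for tool, cond, correct in rows:
        tally[(tool, cond)][0] += int(correct)
        tally[(tool, cond)][1] += 1
    summary = defaultdict(dict)
    for (tool, cond), (ok, n) in tally.items():
        summary[tool][cond] = ok / n
    return {tool: dict(conds) for tool, conds in summary.items()}

def best_tool_for(summary, condition):
    """Pick the tool with the highest accuracy under one recording condition."""
    return max(summary, key=lambda t: summary[t].get(condition, 0.0))

summary = per_condition_accuracy(results)
print(best_tool_for(summary, "low_light"))
```

In this toy data the best tool differs by condition (one tool leads on low-light clips, the other on compressed footage), which is exactly the gap between lab-condition rankings and field suitability that a context-aware benchmark is meant to expose.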
TRIED offers a trusted and independent benchmarking system tailored to real-world use cases. It supports multilingual and multi-context evaluations, making it well suited to organizations operating across regions. It is backed by credible human rights institutions such as WITNESS and presents its findings transparently. One major advantage is that it goes beyond lab conditions and tests detection tools on authentic, imperfect field data, bridging the gap between theory and practice. TRIED also promotes ethical, inclusive evaluation standards that emphasize usability, accessibility, and reliability, especially in non-Western contexts. It is not commercially driven, which adds to its neutrality. Researchers, technologists, and humanitarian workers can use it to inform tool selection, internal training, and advocacy work.
TRIED is not an operational detection tool. There is no user interface or self-service access at present. Use is limited to reading evaluation reports unless formal partnerships are established.
No direct cost. The platform provides public reports for free. Some technical access may require collaboration or institutional partnership.
Without TRIED, organizations may rely on unreliable or unsuitable detection tools, leading to false negatives or positives in media verification. This undermines trust in reports and puts advocacy and protection efforts at risk.