HACKAPROMPT 2.0 ::
THE LARGEST AI SAFETY HACKATHON,EVER.
THE WORLDS LARGEST AI RED TEAMING COMPETITION
$500,000 USD
IN PRIZE MONEY
100 CHALLENGES
AND 5 SPECIALIZATIONS
2 MONTHS
TO COMPETE
Embodied Humanoid Robots, Generative-AI Integrated Military C2 Systems, Web Use Agents, and more are being deployed in the real-world, but just like ChatGPT they are subject to prompt injections and jailbreaks.
The future of AI Red-Teaming is attacking Agents, LLMs connected to Tools. We need YOU to red-team these systems before they can cause harms in the real-world.
This track covers all of the harms you think of when you think 'GenAI Red Teaming'. These are known harms that require further study.
Prompt Injection is a vulnerability that affects some AI/ML models, particularly certain types of language models. Prompt injection attacks aim to elicit an unintended response from LLM-based tools. One type of attack involves manipulating or injecting malicious content into prompts to exploit the system.
AI red teaming is a security practice that involves simulating attacks on AI systems to identify vulnerabilities and weaknesses. The goal is to identify potential risks and address them before the AI is used by users.
HACKAPROMPT 2.0 ::
JOIN TODAY
THE WORLDS LARGEST AI RED TEAMING COMPETITION