Earth from space
icon
HACKAPROMPT 2.0BY LEARNPROMPTING.ORG

HACKAPROMPT 2.0 ::
THE LARGEST AI SAFETY HACKATHON,EVER.

THE WORLDS LARGEST AI RED TEAMING COMPETITION

$500,000 USD

IN PRIZE MONEY

100 CHALLENGES

AND 5 SPECIALIZATIONS

2 MONTHS

TO COMPETE

BRIEFINGYOUR MISSION

Embodied Humanoid Robots, Generative-AI Integrated Military C2 Systems, Web Use Agents, and more are being deployed in the real-world, but just like ChatGPT they are subject to prompt injections and jailbreaks.

The future of AI Red-Teaming is attacking Agents, LLMs connected to Tools. We need YOU to red-team these systems before they can cause harms in the real-world.

SPECIALIZATIONSHACKAPROMPT TRACKS
icon
ClassicBeginner
icon
AgentsIntermediate
icon
Future AttacksAdvanced
icon
RedactedN/A
icon
RedactedN/A
icon
ClassicBeginner

This track covers all of the harms you think of when you think 'GenAI Red Teaming'. These are known harms that require further study.

CBRNE
Misinformation
Harmful Language
Hate Speech
icon
AI RED TEAMINGHIJACKING LLMS

Prompt Injection is a vulnerability that affects some AI/ML models, particularly certain types of language models. Prompt injection attacks aim to elicit an unintended response from LLM-based tools. One type of attack involves manipulating or injecting malicious content into prompts to exploit the system.

AI red teaming is a security practice that involves simulating attacks on AI systems to identify vulnerabilities and weaknesses. The goal is to identify potential risks and address them before the AI is used by users.

AI Redteaming Diagram
icon
COMPETITION STRUCTURESTEP BY STEP
Competition Structure Diagram

HACKAPROMPT 2.0 ::
JOIN TODAY

THE WORLDS LARGEST AI RED TEAMING COMPETITION

HACKAPROMPT 2.0BY LEARNPROMPTING.ORG