Description
Jeopardy-style Capture-The-Flag (CTF) challenges play a central role in cybersecurity education and training; however, creating them remains a resource-intensive and technically demanding process. This paper investigates the capability of general-purpose large language models (LLMs), specifically ChatGPT, Gemini, Copilot, Claude, and DeepSeek, to automate the generation of CTF challenges. We prompt each model to create complete tasks, including titles, descriptions, artifacts, and solution write-ups, across core categories such as web exploitation, cryptography, and reverse engineering. The generated challenges are evaluated against criteria including technical correctness, solvability, clarity, creativity, and write-up quality. Our results reveal significant variability across models, reflecting differences in how well each interprets prompts, handles technical nuance, and structures complete scenarios. This preliminary study demonstrates both the potential and the current limitations of using LLMs for CTF design, providing a foundation for more specialized tools aimed at automated cybersecurity challenge generation.
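To make the described workflow concrete, the pipeline of prompting a model per category and scoring its output against a rubric might be sketched as follows. This is an illustrative assumption, not the authors' actual code: the prompt wording, the 1–5 scoring scale, and all function names are hypothetical, while the categories and criteria are taken from the abstract.

```python
from dataclasses import dataclass, field

# Categories and rubric criteria as named in the abstract.
CATEGORIES = ["web exploitation", "cryptography", "reverse engineering"]
CRITERIA = ["technical correctness", "solvability", "clarity",
            "creativity", "write-up quality"]

def build_prompt(category: str) -> str:
    """Compose one generation prompt asking an LLM for a full CTF task.
    The exact wording here is a guess at the style of prompt used."""
    return (
        f"Create a Jeopardy-style CTF challenge in the category "
        f"'{category}'. Include: a title, a description, any required "
        f"artifacts (files or code), and a complete solution write-up."
    )

@dataclass
class Evaluation:
    """Per-challenge rubric scores: one 1-5 score per criterion
    (the scale itself is an assumption)."""
    scores: dict = field(default_factory=dict)

    def mean(self) -> float:
        return sum(self.scores.values()) / len(self.scores)

# Example: build a prompt and score one hypothetical generated challenge.
prompt = build_prompt(CATEGORIES[0])
ev = Evaluation(scores={c: 4 for c in CRITERIA})
print(prompt)
print(f"mean score: {ev.mean():.1f}")
```

In practice the prompt would be sent to each model's API and the rubric filled in by human reviewers; the sketch only shows how the per-category, per-criterion structure of the evaluation could be organized.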