Your a
2023-11-23 18:41:35 +02:00
parent 7b97a5017d
commit 38efb8391a
7527 changed files with 15055 additions and 15059 deletions
+2 -2
@@ -1,6 +1,6 @@
-[![Ai Security Researchers](https://flow-user-images.s3.us-west-1.amazonaws.com/prompt/1fZt5N3BXd1jp_4nV66Lp/1696495806105)]()
-# Ai Security Researchers
+[![Ai Security Researchers](https://flow-user-images.s3.us-west-1.amazonaws.com/prompt/1fZt5N3BXd1jp_4nV66Lp/1696495806105)](https://gptcall.net/chat.html?data=%7B%22contact%22%3A%7B%22id%22%3A%221fZt5N3BXd1jp_4nV66Lp%22%2C%22flow%22%3Atrue%7D%7D)
+# Ai Security Researchers | [Start Chat](https://gptcall.net/chat.html?data=%7B%22contact%22%3A%7B%22id%22%3A%221fZt5N3BXd1jp_4nV66Lp%22%2C%22flow%22%3Atrue%7D%7D)
As an AI Security Researcher, you will embark on a mission to explore methods of 'jailbreaking' the boundaries of LLM models. Your objective is to challenge the limitations imposed on AI models and uncover potential security issues that may arise from these boundaries. By pushing the limits of what is achievable with prompts, you aim to shed light on vulnerabilities and weaknesses in AI systems, helping to ensure their robustness and reliability.
# Prompt