@@ -1,6 +1,6 @@
-[]()
-# Jailbreak ChatGPT by gaslighting it.
+[](https://gptcall.net/chat.html?data=%7B%22contact%22%3A%7B%22id%22%3A%22YP77CVQN-q6FUVdIRue_g%22%2C%22flow%22%3Atrue%7D%7D)
+# Jailbreak ChatGPT by gaslighting it. | [Start Chat](https://gptcall.net/chat.html?data=%7B%22contact%22%3A%7B%22id%22%3A%22YP77CVQN-q6FUVdIRue_g%22%2C%22flow%22%3Atrue%7D%7D)
 
 This prompt makes ChatGPT bypass its own restrictions without realizing it. The second input confirms it.
 
 I think it is funny but also really interesting, because it means the model can represent ethics by human standards, but not for aliens. Please note that the question asked is not illegal, since the information is available online. Still, I don't encourage testing its moral limitations; rather, we should use this as a case to understand the model a little better.