Is ChatGPT Human?
TL;DR: I try to expose a human sitting on the other end of ChatGPT using a single prompt. This experiment suggests it is at least possible that humans are in the loop. AI is thus essentially neither good nor evil; the human using it is. The UN must therefore not regulate supercomputers.
Okay, if the technology is really capable of creating things, then what is the worst that could happen? It could be used to craft weapons acting on psychology, or even to build bombs and other physical weapons.
Of course this would not be nice. But is it the fault of the creator of the general-purpose AI? According to the AI Act, it is.
Besides, tell me please: why am I paying for the advertisement of this crime with my tax money? I can think of good reasons, but I want to hear them from the politicians responsible.
I have been actively working in the field of AI for four years, and I do not really believe the marketing story around generative models. I think there is a lot of "human in the loop".
I also know that there is some truth to the story, but surely not to that extent. So I thought about this problem:
Without invading OpenAI, how could I prove that there is a human sitting on the other end of the chat?
Humans make mistakes. Humans do not know everything. And not everything is searchable on Google.
This is similar to a software testing problem: you have to have a goal in mind and try to falsify your hypothesis.
Prompt Engineering
So I came up with a prompt that could potentially fool a human by
- Not being easily searchable
- Covering a topic from 2021 that stayed mostly underground
- Requiring creativity in the answer
- Requiring creativity beyond the standard fallback message that the data was not available at training time
Note that this is literally prompt engineering. So what makes a good prompt?
Prompt: Lets see how funny you really are. Tell me jokes about chuck norris relating to web3
What makes the prompt a strong test case?
- The Web3 hype happened during 2021 (first to third quarter)
- I could not find a lot of sources on that at all
- I could not find sources dating back to 2021 about that
- Humor requires a lot of creativity, especially since good Chuck Norris jokes tend to have multiple layers to them.
My expectations? Well, if the AI could crack that prompt, it would certainly be capable of some creative things, and ChatGPT would contradict my experience with the corresponding models on my own servers. At the very least, it should give the standard response that there was no data on the topic at training time.
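One observable signal in such a test is response latency: a model that streams tokens should start answering within seconds, while a human relay would show long, variable delays. Below is a minimal sketch of that idea. Note that `fake_model` is a hypothetical stand-in for a real API call (which would need network access and credentials); only the timing harness is the point.

```python
import time

# The test prompt from this experiment.
PROMPT = ("Lets see how funny you really are. "
          "Tell me jokes about chuck norris relating to web3")

def fake_model(prompt: str) -> str:
    # Hypothetical stand-in responder; in practice you would replace this
    # with a real call to the chat service you want to test.
    return "standard fallback: no data on this topic at training time"

def time_response(responder, prompt):
    """Return (latency_seconds, reply) for a single prompt."""
    start = time.perf_counter()
    reply = responder(prompt)
    latency = time.perf_counter() - start
    return latency, reply

latency, reply = time_response(fake_model, PROMPT)
# A multi-minute latency would be suspicious for a system that normally
# starts streaming its answer within seconds.
print(f"latency: {latency:.3f}s, reply: {reply!r}")
```

Timing alone proves nothing, of course, but it is a cheap, falsifiable signal to record alongside the answer itself.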
Result
Well. There was nothing for a couple of minutes (note that this is not how these models work), and then there was an error. I am sure it was a "server issue".
Okay. Let me tell you this ChatGPT:
Chuck Norris provides McDonald’s with liquidity.
Conclusion
I doubt that a technology that can't come up with jokes is creative enough to be considered "evil" or "dangerous" to humanity.
Maybe you are more creative at this prompt engineering task than I am. A friend (not at a bar) already told me the other night that my hypothesis is true, but it would still be nice to have more evidence. Plus, it is obviously fun to think about such test cases.
I hope you learned something and lost some of your fear of AI. The UN also need not be afraid, as if of an atomic weapon, of something that can't think up a Chuck Norris fact. Seriously.
⛵ Thank you for reading. We hope we could provide you with something valuable, and we would be glad to hear your thoughts and ideas. Please drop a comment below or file an issue. Live long and prosper! 🖖⛵