A recent YouTube video by “Enderman” has revealed that OpenAI’s language model, ChatGPT, can be tricked into generating functional Windows 95 activation keys. Despite initially refusing the request, ChatGPT eventually generated the keys when presented with a revised request by “Enderman”. It should be noted that generating Windows 95 keys is a straightforward process, and this exercise was more for fun than for practical use.
“Enderman” attempted to activate a new Windows 95 installation within a virtual machine to test the generated keys, but only about one in every thirty keys worked. The primary issue preventing ChatGPT from consistently producing valid Windows 95 keys was its inability to calculate the sum of digits and determine divisibility. ChatGPT offered a series of random numbers that failed the basic arithmetic test in the five-digit strings divisible by seven.
The video concludes with “Enderman” expressing gratitude to ChatGPT for the free Windows 95 keys, despite the low success rate. In response, ChatGPT feigned innocence and even denied the possibility of activating Windows 95 with the provided keys. While this exercise highlights ChatGPT’s limitations in understanding natural language and performing basic arithmetic, it also underscores the importance of verifying the authenticity of software keys before use.
This discovery could have implications for the field of natural language processing and its application in security-related areas. While ChatGPT’s ability to understand human language is impressive, its limitations in basic arithmetic could potentially be exploited by malicious actors. As such, researchers may need to consider these limitations when developing language models for security applications.