A group of tech entrepreneurs founded OpenAI in 2015, but it was not until Q4 of 2022 that ChatGPT, a chatbot built on the GPT (Generative Pre-trained Transformer) language model, became a worldwide phenomenon with more than 100 million users.
This is due to the usefulness of this AI-powered tool, capable of answering complex and detailed questions about almost anything.
Precision, however, is not the strong suit of ChatGPT yet.
As you can see from this sample chat, when the AI is posed tricky questions, especially ones that hint at possible rule-breaking, it gives deliberately conservative answers so as never to recommend debatable actions, or in some cases it simply refuses to answer.
So, there is no imminent threat of AI taking over the world by hacking, or of it enabling anyone who wants to hack into systems to do so from scratch.
Still, the usefulness of this tool is a proven fact, just like its limitations, and although it cannot make a hacker out of anyone, it can aid malicious actors in several ways.
Although there have been no official reports of AI-based attacks, several rumours of recent malicious AI-assisted activity circulate online.
In this article, we will explore in what ways hackers are using ChatGPT to further illicit schemes and improve their offensive strategies.
The hacking strategies that ChatGPT improved
There are several ways to make use of AI, for a very simple reason: it helps you gather knowledge faster and more accurately than you could on your own.
This very simple aspect applies to all sorts of criminal activity, not just hacking. It is also worth noting that OpenAI is making a massive effort to limit the scope of ChatGPT's answers to licit activities. As of today, you cannot simply ask ChatGPT “how do I hack NSA servers” and get a step-by-step bullet-point action list.
With that said, the AI chat model can only infer the goal of a question from the question itself, which means there are still several ways one could use its aid to further illicit behaviour.
These many options can be summarised into three main categories:
ChatGPT can be asked to write content directly without specifying its purpose, only its features. Its ability to write lines of code or human-like text following instructions makes it a very effective tool for accelerating the creation of malware and phishing tools.
In these cases, using the famous AI leads to direct results that are immediately employable for malicious use. As dangerous as that may sound, the fact that a directly illicit intent is easier for the language processing model to spot also makes it easier for ChatGPT to avoid aiding malicious actors.
However, there is only so much that OpenAI can do to train its creation to spot its community members’ intentions, especially as more subtle malicious uses are basically impossible to spot even for humans.
The information war has also captured a lot of attention during 2022. As fake news, or the fear of fake news, spreads, more and more people take an interest in the impact of information shocks on public debate and in the importance of having reliable sources.
There have been reports of malicious actors using ChatGPT-generated content, such as essays and articles, with the sole purpose of spreading fake news and creating panic and distrust.
It is the responsibility of the entire community to be vigilant and report any suspicious activities involving ChatGPT scams. OpenAI is committed to promoting the ethical and responsible use of its technology, and will continue to work towards improving the ability of ChatGPT to detect and prevent malicious intent.
OSINT and information scraping are other critical aspects that hackers take into account when preparing a malicious attack. ChatGPT does a great job of pulling together information from a huge catalogue that a human could never go through on their own.
The language model will tell you, if you ask, that its information ranges roughly from 2000 to 2021. This means billions of entries feed the AI, and those entries can be quickly filtered with a question.
As gaining knowledge about a target can give an attacker a huge advantage over a victim on the web, the help the AI provides in this sense is crucial.
Here too, however, it must be noted that OpenAI has made efforts to ensure that personal information is not disclosed during the regular use of ChatGPT.
How to ensure your information is safe?
ChatGPT might sound like a game changer, when in fact it only changed the game’s pace. The cybersecurity rules and best practices that were in force until the day before its release are more valid and useful than ever.
The fact that ChatGPT’s data only dates up to 2021 is a huge factor. If you worry about possible uses of information related to you, or if you know that someone has accessed your information through ChatGPT, it is only a matter of rendering that information outdated.
Changing your email address or phone number today requires a negligible investment of time and money. For other information in publicly available sources (journal articles and reports) that you do not want ChatGPT or any other web scraper to access, you can ask the website owner to remove it. If you own a business and do not want your online data to be accessible to automated programs, you should enforce authentication on your website or make use of a so-called “robots.txt” file to exclude bots and search engines from content you have published on the web.
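To illustrate how a robots.txt file works, here is a minimal sketch using Python’s standard library parser. The rules and URLs below are illustrative assumptions, not a real site’s policy; note also that robots.txt is only honoured by well-behaved crawlers and is not an access control.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules: block all bots from a private
# directory while leaving the rest of the site crawlable.
rules = """\
User-agent: *
Disallow: /internal-reports/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant scraper checks permission before fetching each page.
print(parser.can_fetch("*", "https://example.com/internal-reports/q3.pdf"))  # False
print(parser.can_fetch("*", "https://example.com/blog/post"))                # True
```

A `Disallow` line covers every path that starts with the given prefix, so one rule is enough to shield an entire directory from compliant bots.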
Last but not least, fight fire with fire. If you are concerned that anyone might use ChatGPT against you by scraping your info, use the tool yourself and test how much of your personal information it exposes. From within ChatGPT you can report and give feedback on the answers provided. This lets you immediately flag content that could be directly harmful to you, and the issue will be handled by the OpenAI team.
ChatGPT is in charge of vetting its own users’ intentions, which sounds like putting a child in charge of deciding whether he or she is telling the truth.
In fact, despite its remarkable potential, AI still struggles to produce content that can only be used for legitimate purposes, and it can aid malicious actors.
With that said, you are not defenceless in this process. Besides the continuous efforts made by OpenAI to improve the tool and respond to its users’ concerns, you can improve privacy for yourself and your business in the areas that were still lacking by following the best practices suggested in this article.
ChatGPT cannot give answers that are evidently unethical or debatable on moral standards. It also struggles with absurd or abstract questions and it tries to validate its own text as much as possible, often being self-referential even in short texts. Try to spot these patterns in the content you are reading as well as clear errors of common sense.
You can use the plagiarism and AI checkers available online to actively prevent excessive use of AI-written content, or demand updated content (2022 and later), which ensures that ChatGPT had a limited contribution to the text.
All the parties involved in the misuse, including the ChatGPT support team, should be alerted, with priority given to law enforcement in your country if the misuse you spotted resulted in a criminal act.
Ethical guidelines could require that ChatGPT-created content only be used after being double-checked by humans, with thorough referencing to ensure its validity and accuracy. You could also establish a word-count threshold or exclude certain topic areas.