Beware! Hackers using AI bot ‘ChatGPT’ to write malicious code to steal your data

New Delhi: Artificial intelligence (AI)-powered ChatGPT, which provides human-like answers to questions, is also being used by cybercriminals to develop malicious tools that can steal data, a report warns. The first examples of cybercriminals using ChatGPT to create malicious code were discovered by researchers at Check Point Research (CPR). On underground hacking forums, threat actors are creating “infostealers” and encryption tools to facilitate fraudulent activity. Researchers warn that cybercriminals’ interest in ChatGPT is growing rapidly as they share and teach these malicious techniques.


“Cybercriminals are attracted to ChatGPT. Over the past few weeks, we have seen evidence that hackers have started using ChatGPT to create malicious code. ChatGPT gives hackers a good starting point and has the potential to speed up the hacking process,” said Sergey Shykevich, Threat Intelligence Group Manager at Check Point.

Just as ChatGPT can be used to help developers write code, it can also be used for malicious purposes. On December 29, a thread titled “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The thread’s author revealed that he was using ChatGPT to try to recreate malware strains and techniques described in research publications and general write-ups about common malware.


“While this individual may be a tech-oriented threat actor, these posts appear to show, with real-world examples, how less technically skilled cybercriminals could readily use ChatGPT for malicious purposes,” the report said. On December 21, another threat actor posted a Python script, stressing that it was the first script he had ever created.

When another cybercriminal commented that the style of the code resembled OpenAI’s output, the hacker confirmed that OpenAI had given him “a nice hand” to finish the script. The report warns that this could mean potential cybercriminals with little or no development skills can leverage ChatGPT to develop malicious tools and become full-fledged cybercriminals with technical capabilities.

“The tools we analyzed are very basic, but it’s only a matter of time before more sophisticated attackers step up their use of AI-based tools,” said Shykevich. ChatGPT developer OpenAI is reportedly looking to raise capital at a valuation of nearly $30 billion. Microsoft has invested $1 billion in OpenAI and is now pushing to apply ChatGPT to solving real-world problems.