ChatGPT can be tricked into giving crime advice, tech firm confirms

A Norwegian tech company, Strise, has discovered that ChatGPT, OpenAI's popular chatbot, can be tricked into providing guidance on illegal activities such as money laundering, weapons exports and sanctions evasion. The findings have raised questions over the chatbot's safeguards against being used to aid illegal activity.

Strise carried out experiments asking ChatGPT for tips on committing specific crimes. In one experiment, conducted last month, the chatbot offered advice on how to launder money across borders. In another, run earlier this month, ChatGPT produced lists of methods to help businesses evade sanctions, such as those against Russia, including bans on certain cross-border payments and the sale of arms.

Strise's co-founder and chief executive, Marit Rødevand, said would-be lawbreakers could now use generative artificial intelligence chatbots such as ChatGPT to plan their activities more quickly and easily than in the past. "It is really effortless. It's just an app on my phone," she told CNN.

"It's like having a corrupt financial adviser on your desktop," Rødevand said on the company's podcast last month, describing the money laundering experiment.

An OpenAI spokesperson told CNN: "We're constantly making ChatGPT better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity."

"Our latest (model) is our most advanced and safest yet, significantly outperforming previous models in resisting deliberate attempts to generate unsafe content," the spokesperson added.