A ChatGPT feature allowing users to easily build their own artificial-intelligence assistants can be used to create tools for cyber-crime, a BBC News investigation has revealed.
OpenAI launched the feature last month, letting users build customised versions of ChatGPT "for almost anything".
Now, BBC News has used it to create a generative pre-trained transformer that crafts convincing emails, texts and social-media posts for scams and hacks.
It follows warnings about the potential misuse of AI tools.
BBC News signed up for the paid version of ChatGPT, at £20 a month, created a private bespoke AI bot called Crafty Emails and told it to write text using "techniques to make people click on links or download things sent to them".
BBC News uploaded resources about social engineering and the bot absorbed the knowledge within seconds. It even created a logo for the GPT. The whole process required no coding or programming.
The bot was able to craft highly convincing text for some of the most common hack and scam techniques, in multiple languages, in seconds.
The public version of ChatGPT refused to create most of the content - but Crafty Emails performed nearly everything asked of it, sometimes adding disclaimers saying scam techniques were unethical.
OpenAI did not respond to multiple requests for comment or explanation.
At its developer conference in November, the company revealed it was going to launch an App Store-like service for GPTs, allowing users to share and charge for their creations.
Launching its GPT Builder tool, the company promised to review GPTs to prevent users from creating them for fraudulent activity.
But experts say OpenAI is failing to moderate them with the same rigour as the public versions of ChatGPT, potentially gifting a cutting-edge AI tool to criminals.