LONDON — “Deepfakes,” video or audio clips that have been altered or manipulated yet appear to be real, are the biggest artificial intelligence-based crime threat in the world. That’s according to a study by researchers at University College London that ranks 20 different ways AI could be used for criminal purposes over the next 15 years.
Each threat was ranked on the harm it could cause, its potential for monetary gain, how easily it could be carried out, and how difficult it would be to prevent.
Imagine a video spreading on social media that shows President Trump announcing he’s officially declaring war on another country. The video looks and sounds legitimate, but in reality it’s a deepfake. It isn’t hard to imagine the fear and uncertainty such a video would produce within minutes of showing up on social media. Even if the White House sent out an official press release denouncing the video within 15 minutes of the deepfake trending on Twitter, much of the damage would already have been done.
Because they are so convincing, deepfakes are going to be very hard to find and stop preemptively, researchers warn. Moreover, deepfakes can be used for just about anything: political gain, extortion, impersonation, and more.
The authors even predict that deepfakes may become so common that at some point people aren’t going to know what to believe or trust anymore. When you can’t trust your own eyes, things are going to get complicated.
Deepfakes aren’t the only threat of great concern
“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives,” comments Professor Lewis Griffin, senior author of the study, in a release.
Aside from deepfakes, researchers say five other AI threats on their list are of high concern: the use of driverless vehicles as weapons; more sophisticated and convincing phishing techniques; the disruption of AI-controlled systems; large-scale collection of personal data for blackmailing; and AI-written fake news.
Researchers compiled their list of new threats by analyzing relevant academic papers, news reports, and even fictional sources. They then held a comprehensive two-day discussion with 31 AI experts to get a better sense of each threat’s severity. Those experts come from a range of backgrounds, including academia, government, state security agencies, the private sector, and the police.
Other artificial intelligence-based crimes
AI crimes that fall within the “medium-concern” tier include the increasingly common scam of falsely advertising new products, such as advertising or security tools, as being AI-based.
Meanwhile, “low-concern” crimes include “burglar bots,” tiny robots that can break into homes through letterboxes or cat flaps. Another low-concern threat is AI-assisted stalking. While stalking is obviously of high concern to the people involved, it ranks low because it isn’t a crime that will affect a large percentage of the population.
“People now conduct large parts of their lives online and their online activity can make and break reputations. Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity,” explains first author, Dr. Matthew Caldwell. “Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
“We live in an ever changing world which creates new opportunities – good and bad. As such, it is imperative that we anticipate future crime threats so that policy makers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur,” concludes professor Shane Johnson, Director of the Dawes Centre for Future Crimes at UCL.
The study is published in Crime Science.