Basil Jarrett | AI scams and deepfakes: The new threat to law enforcement
AN ONLINE cryptocurrency scam made headlines on Tuesday, luring unsuspecting investors with the promise of hundreds of thousands of dollars to be made from the platform. As anyone in law enforcement will tell you, this is nothing new, as fake websites and online scamming platforms seem to pop up just about everywhere these days. Except this time, this particular scam used the names and images of a number of well-known Jamaican public figures, making it appear that they had issued shocking statements of endorsement.
The website quickly went viral, stirring up confusion and bewilderment, as none of the persons used in the site’s promotion claimed any knowledge of the platform, much less the sums of money that were said to be made. This, despite their images and likenesses being plastered front and centre alongside ringing endorsements. Michael Lee-Chin, Cliff Hughes, Dionne Jackson Miller, Zach Harding, and Chris Zacca were among those whose fabricated endorsements were used to promote the sham cryptocurrency site.
DEEPFAKES AND AI
Welcome, Jamaica, to the world of fakes and deepfakes, where artificial intelligence (AI) can create highly realistic but entirely false content. As these technologies become more sophisticated, they pose significant threats, not just to your hard-earned cash, but also to law enforcement and cybersecurity sleuths. Welcome, again, to the brave new world of AI scams and deepfakes.
As a crisis communications consultant and law-enforcement professional, I’ve seen the massive and disruptive potential of new technologies. AI scams and deepfakes represent the latest frontier, bringing with them challenges that can not only undermine public trust and compromise investigations, but also strain law-enforcement resources. Undoubtedly, AI has revolutionised many aspects of our lives, from personalised recommendations on social media to advanced medical diagnostics. But like all powerful tools, AI has a dark side. Deepfakes use AI to create hyper-realistic fake videos, audio, and images that can be incredibly convincing, making it difficult to distinguish between real and fake.
AI scams, on the other hand, leverage machine-learning algorithms to conduct sophisticated frauds. From phishing emails that are nearly indistinguishable from legitimate communication to automated voice calls that mimic real people, these scams are becoming increasingly hard to detect and defend against. This has profound implications for law enforcement, as deepfakes can be used to spread misinformation, incite violence, discredit public officials, and target vulnerable individuals with ease. The challenge for law enforcement, therefore, is to stay ahead of these rapidly evolving threats while maintaining public trust and confidence.
THE THREAT TO PUBLIC CONFIDENCE
One of the most insidious aspects of deepfakes and AI scams is their potential to undermine public trust and confidence. Consider, for example, a deepfake video showing a high-ranking police officer accepting a bribe or making some inflammatory statement about the Jamaica Constabulary Force (JCF). Even if the video is proven fake, the damage to public perception can be long-lasting, especially here in Jamaica, where trust in law enforcement is often a delicate issue. Public trust is the cornerstone of effective policing, and without it, cooperation between the community and law enforcement often breaks down, making it harder to maintain law and order.
Deepfakes and AI scams also pose direct threats to law-enforcement operations, especially where an investigation relies on video evidence. A deepfake could introduce false leads and waste valuable time and resources. Similarly, AI-generated phishing attacks could compromise sensitive information and jeopardise ongoing investigations. As such, there is an urgent need for law-enforcement agencies to develop tools and strategies to identify and counteract deepfakes and AI scams.
STRAINING RESOURCES
But dealing with AI scams and deepfakes, especially given the rapid advancement of the technology, requires significant resources. Law-enforcement agencies now need to invest in advanced technologies and train personnel to recognise and respond to these threats. Here, resources are often stretched thin, and the additional burden of combating AI scams and deepfakes can be overwhelming. It’s essential, therefore, to find cost-effective solutions that can be implemented without diverting resources from other critical areas. Advanced software tools that can detect deepfakes by analysing inconsistencies in video and audio files are therefore gaining popularity in law-enforcement circles as cybersecurity personnel try to stay ahead of the curve.
PUBLIC AWARENESS CAMPAIGNS
But the solution to AI scams and deepfakes lies not only with law-enforcement professionals, but also in educating the public about their dangers. By raising awareness, law enforcement can help individuals recognise and report suspicious activity, thereby reducing the effectiveness of these scams. In Jamaica, the fight against AI scams and deepfakes must be tailored to local realities, and so community engagement is vital. Law-enforcement agencies should therefore work closely with community leaders to build trust and educate the public about the risks associated with these technologies.
It is also important to update our laws to address the unique challenges posed by AI scams and deepfakes, as clearer legal frameworks for the creation, distribution, and use of deepfake technology are critically needed.
INTERNATIONAL COOPERATION
Another important area of focus in tackling cyberthreats and cybercrime is cross-border cooperation. This week, for instance, in a testament to the power of global collaboration and advanced cybersecurity measures, the Major Organised Crime and Anti-Corruption Agency (MOCA), in partnership with US federal law enforcement and Nigerian authorities, apprehended two cybercriminals in Lagos, Nigeria, for their involvement in a sophisticated cybercrime operation targeting the National Water Commission (NWC). Essentially, the criminals had used a business email compromise technique to illegally route NWC funds to themselves. Fortunately, the NWC had a good detection and notification system that allowed it to alert MOCA, which in turn collaborated with its US and Nigerian law-enforcement partners to arrest the suspects. And this is exactly how you tackle this sort of crime, given its borderless nature. International cooperation is a key ingredient in tackling AI scams and deepfakes effectively.
ADAPTING TO THE NEW REALITY
Without a doubt, AI scams and deepfakes represent a new frontier in the realm of cyberthreats. For law enforcement, adapting to this new reality is not just about embracing technology, but also about building resilience, fostering trust, and ensuring that the community is informed and vigilant, because, as time goes by, incidents like Tuesday’s celebrity crypto scam will only become more commonplace.
Major Basil Jarrett is a communications strategist and CEO of Artemis Consulting, a communications consulting firm specialising in crisis communications and reputation management. Visit him at www.thecrisismajor.com. Send feedback to columns@gleanerjm.com.