Artificial Intelligence (AI) focuses on creating intelligent machines that can perform tasks that typically require human intelligence, such as understanding natural language, recognising images, making decisions, and solving complex problems.
Some possible malicious and criminal uses of AI-powered deepfakes:
- Harassing or humiliating an individual and destroying their image and credibility.
- Luring employees into sending money or revealing privileged information.
- Duping online systems that validate customers by fooling Know Your Customer (KYC) mechanisms.
- Manipulating electronic evidence in criminal justice investigations.
- Supporting disinformation campaigns and stoking social unrest and political polarisation.
- Using AI tools to guess users' passwords.
Sample modus operandi of an AI-based deepfake scam:
AI-based phoney video-call frauds can be especially persuasive and manipulative, so users must stay wary and watchful online. A typical scam unfolds as follows.
- Victims are chosen at random and socially profiled, and the scammers then begin communicating with them through their social networks.
- The scammer impersonates a known person and requests financial assistance, narrating false stories (backed by AI-based deepfakes) to convince the victims to transfer large sums of money.
- Excuses the scammers might use to deceive the victims include: (a) "I have been robbed." (b) "I need money for a medical emergency." (c) "I have met with an accident."
How AI can be misused by cybercriminals:
- Deepfake Attacks – AI can help create deepfakes (synthetic videos and fake virtual identities) that appear all too realistic and can be used to impersonate individuals, spread misinformation, and damage reputations.
- Voice Vishing – Cybercriminals can clone human speech and audio to carry out advanced voice-phishing attacks, e.g., impersonating a family member to trick victims into transferring money on the pretext of an emergency.
- Credential Stuffing – A method in which attackers use lists of compromised user credentials to break into a system, automating the process with AI.
- Malware Development – AI can be used to create sophisticated malware that evades traditional security measures and adapts its behaviour to the environment in which the target operates.
- AI-powered Botnets – AI-powered botnets can mimic human behaviour, effectively blending into the digital landscape.
- Social Engineering Attacks – Using AI tools, cybercriminals can draft extremely sophisticated emails that appear as though a human wrote them, making social-engineering attacks far more convincing.
- Disinformation & Misinformation – AI can automate the creation and spread of disinformation or misinformation on a large scale, which can be used to manipulate public opinion or destabilise societies.
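On the defensive side, the credential-stuffing technique listed above is often spotted by its machine-like login rate: automated tools fail far faster than a human mistyping a password. A minimal sketch of such rate-based detection (the class name and threshold values are illustrative assumptions, not a standard API):

```python
from collections import defaultdict, deque

# Hypothetical thresholds chosen for illustration only.
WINDOW_SECONDS = 60
MAX_FAILURES = 10


class StuffingDetector:
    """Flags source IPs whose failed-login rate suggests automated
    credential stuffing rather than a human mistyping a password."""

    def __init__(self):
        # Maps each source IP to a queue of recent failure timestamps.
        self.failures = defaultdict(deque)

    def record_failure(self, ip: str, timestamp: float) -> bool:
        """Record one failed login; return True if the IP now looks automated."""
        q = self.failures[ip]
        q.append(timestamp)
        # Drop failures that have aged out of the sliding window.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_FAILURES
```

In practice a detector like this would feed a rate limiter or CAPTCHA challenge rather than block outright, since shared IPs (NAT, corporate proxies) can produce bursts of legitimate failures.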
How security professionals combat AI-based cybercrime:
- Behavioural Biometrics – AI helps identify untrustworthy activity by analysing parameters such as a user's keystrokes when typing, navigational patterns, screen pressure, typing speed, mouse or mobile movements, lip sync, and gyroscope position.
- Malware Detection – AI and ML can analyse large amounts of data to identify patterns and anomalies in malware threats that are challenging for humans to detect.
- Threat Detection – AI algorithms can process and analyse vast datasets such as network traffic logs, system logs, user behaviour, and threat intelligence feeds.
- Predictive Analysis – Uses data analysis, machine learning, artificial intelligence, and statistical models to find patterns that may predict future behaviour, enhancing the reliability of IT assets by helping avoid breakdowns, delays, and disruptions.
- Natural Language Processing (NLP) – Can identify phishing emails by analysing the text of the email, e.g., flagging emails that contain keywords or phrases commonly used in phishing, such as "urgent," "important," or "click here."
- Email Authentication Protocols – Implement email authentication protocols such as SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting and Conformance) to prevent email spoofing.
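The keyword-based NLP check described above can be sketched in a few lines. The phrase list and weights below are illustrative assumptions, not a production filter; real systems combine such signals with trained classifiers:

```python
# Illustrative phishing indicators with hypothetical weights.
SUSPICIOUS_PHRASES = {
    "urgent": 2,
    "important": 1,
    "click here": 3,
    "verify your account": 3,
    "password": 2,
}


def phishing_score(email_text: str) -> int:
    """Sum the weights of suspicious phrases found in the email body."""
    text = email_text.lower()
    return sum(w for phrase, w in SUSPICIOUS_PHRASES.items() if phrase in text)


def looks_phishy(email_text: str, threshold: int = 4) -> bool:
    """Flag the email when its combined keyword score crosses a threshold."""
    return phishing_score(email_text) >= threshold
```

For example, "URGENT: click here to verify your account" scores 8 (urgent + click here + verify your account) and is flagged, while an ordinary scheduling email scores 0.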
A few tips for dealing with AI-based crimes:
- Code Word – Set a verbal or numerical codeword with kids, family members, or trusted close friends, and make sure it is known only to you and your close family members.
- Question the source – Cybercriminals have access to tools that can spoof phone numbers and make them look genuine, so it is recommended that you stop, pause, and think, even if it is a voicemail or text from a known number.
- 2FA – Implement two-factor authentication (2FA) on your social media and email accounts, as it restricts unauthorised access.
- Passwords – Use complex, strong passwords that are hard to crack, with lower- and upper-case letters, numerals, and special characters.
- Unsolicited Emails & SMS – Be cautious when clicking on short links received via email/SMS/messenger; open them only after validating whether the link is a phishing link or a legitimate one.
- Unsolicited Contacts – If you receive unexpected contact or offers that seem too good to be true, be cautious. AI can be used to create very convincing scams.
- Phishing – Emails and Links – AI-based attacks may be more sophisticated, making malicious links harder to recognise, so always verify the authenticity of requests for personal information or money.
- Ask the suspected imposter to turn his/her face during the conversation.
- Ask the person on the other end to wave their hands during the conversation.
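The 2FA tip above commonly relies on time-based one-time passwords (TOTP, standardised in RFC 6238), the six-digit codes produced by authenticator apps. A minimal stdlib-only sketch of how such a code is computed (simplified: SHA-1 only, no clock-drift tolerance):

```python
import base64
import hmac
import struct
import time


def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code (HMAC-SHA-1, 30-second time steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of time steps elapsed since the Unix epoch.
    counter = int((time.time() if timestamp is None else timestamp) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation as defined in RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (`"GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"`, the base32 encoding of the ASCII key "12345678901234567890"), `totp(secret, timestamp=59)` yields "287082", matching the published test vector.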