Much of today’s communication takes place through texting, social media and messaging apps. Whether with people in our network or a website chatbot, most of us have grown to enjoy the convenience of electronic communication.
Those online chatbots on your computer or phone are driven by artificial intelligence (AI) technology that is meant to help get you what you need. These bots are programmed to predict logical steps in each process, responding through a combination of pre-programmed scripts and machine learning algorithms.
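The scripted side of that prediction can be pictured with a minimal sketch. This is an illustration only, not any real product's code; the keywords, replies, and function names here are hypothetical, and a real bot layers machine learning on top of logic like this:

```python
# Minimal sketch of a rule-based chatbot: keyword matching stands in for
# the pre-programmed scripts a real bot uses (all names here are
# hypothetical examples).

SCRIPTS = {
    "order": "I can help with your order. Please enter your order number.",
    "refund": "Refunds take 5-7 business days. Would you like to start one?",
    "hours": "We're open 9am-5pm, Monday through Friday.",
}

DEFAULT = "I'm not sure I understand. Could you rephrase that?"

def respond(message: str) -> str:
    """Return the first scripted reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in SCRIPTS.items():
        if keyword in text:
            return reply
    return DEFAULT

print(respond("Where is my order?"))  # matches the "order" script
print(respond("Tell me a joke"))      # no keyword matches, so the fallback
```

Each message simply routes to a canned reply, which is why bots feel fast and tireless but also why their answers can seem shallow.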
AI-powered chatbots offer several benefits. With no wait for a representative, you can often resolve your issue quickly, and bots are available 24/7.
But these AI-driven interfaces also have a dark side: scams.
AI-Powered Scams
These bots cannot show emotion and have limited functionality. Their automated and predicted responses, however, allow humans to use this technology to deceive others. I recently read an article about ChatGPT, email scams, and “pig butchering,” a phrase used to describe financial scams. According to an affidavit in a Los Angeles case involving more than $112 million, “The victims in Pig Butchering schemes are referred to as ‘pigs’ by the scammers because the scammers will use elaborate storylines to ‘fatten up’ victims into believing they are in a romantic or otherwise close personal relationship.”
To start, the bad guys create professional-looking emails with varying ploys: telling recipients they’ve won or inherited money, offering a chance-of-a-lifetime deal, or even playing on sympathy to request help for themselves or their children. If we crack open the door, they test to see if we will bite. Are we giving up any information with our responses? Once they see a target is engaged and interested, the bad guys take over from the automation, much like a customer-service agent.
Victims may be led to fraudulent websites, where they divulge their personal information and/or pay for a service or product, the coveted payoff that sends money into the scammer’s account. One scammer can send out numerous chatbot phishing emails while they sit back, relax, and wait to see who takes the bait. The odds are in their favor.
Again, it is easy for AI to start weeding out those who won’t fall for the email/phone scam so humans can take advantage of more vulnerable prey.
Protect Yourself from AI Scams (and Other Threats)
While many of these tips may look familiar, a timely reminder could prevent you from falling prey to increasingly sophisticated scams:
- Never give up names, phone numbers, or addresses through a link. Protecting personal information is a critical first step.
- Don’t respond to unsolicited messages via text or social media.
- Check the return email. It’s usually slightly—if not completely—different from a legitimate company address. Beware of messages from a free email account (e.g., Gmail, AOL, Yahoo).
- Don’t click on links in emails from unknown senders.
- Don’t continue to chat with people who won’t video chat. This is a telltale sign they are not who they claim to be.
- Do not discuss your financial status or any related concern, even if the other party claims to already have your file and just needs to verify details.
- If you start feeling pressured or uneasy, RUN before it is too late.
- If it sounds too good to be true, it likely is.
- Use antivirus software on your computer. These programs are designed for your safety and security.
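The "check the return email" tip above can be sketched as a simple domain check. This is a minimal illustration of the idea, not a complete spam filter; the provider list, domains, and function names are hypothetical examples:

```python
# Illustrative sketch: flag senders whose address uses a free email
# provider or whose domain doesn't exactly match the company's real
# domain. The lists and domains below are made-up examples.

FREE_PROVIDERS = {"gmail.com", "aol.com", "yahoo.com"}

def sender_domain(address: str) -> str:
    """Extract the domain portion of an email address."""
    return address.rsplit("@", 1)[-1].lower()

def looks_suspicious(address: str, expected_domain: str) -> bool:
    """True if the sender uses a free provider or an unexpected domain."""
    domain = sender_domain(address)
    return domain in FREE_PROVIDERS or domain != expected_domain.lower()

# A lookalike domain ("examp1e.com", with a digit 1) fails the exact match.
print(looks_suspicious("support@examp1e.com", "example.com"))  # True
print(looks_suspicious("billing@gmail.com", "example.com"))    # True
print(looks_suspicious("billing@example.com", "example.com"))  # False
```

An exact-match check like this catches the "slightly different" addresses the tip warns about, though real phishing defenses combine many more signals.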