There’s no question that Artificial Intelligence (AI) is a tool that can swindle millions. From “deep fake” images to robo scams, it’s already out there fleecing people. Is there anything you can do to protect yourself from robo-thievery?
Government agencies are painfully slow to address this threat. I recently attended a panel in Washington exploring concerns about AI. The one sure conclusion I came away with is that AI scams will focus on older Americans.
“Older adults will be the targets,” noted Rohit Chopra, director of the Consumer Financial Protection Bureau (CFPB), at a recent conference. “We’re in a cautious moment because the harms are very real.”
Chopra noted that AI is already being used in customer service “chatbots,” although “it’s not intuitive that people are talking to robots.”
The perils of AI are not lost on a small group of legislators. Senator Jack Reed (D-R.I.) said in a statement: “As AI evolves and becomes more prevalent and sophisticated, bad actors are trying to take advantage using ‘spoofed speech’ and other techniques. It is important for people to be educated and on guard against these scams. The FTC and CFPB need to take action to help thwart this type of fraud.”
It doesn’t appear, given the dysfunctional leadership in the U.S. House of Representatives, that we will see any federal AI safeguards in the near future. In the interim, you will need to be vigilant in spotting common AI deceptions highlighted in a recent CFPB report:
- Financial institutions are increasingly using chatbots as a cost-effective alternative to human customer service. The CFPB’s review found that each of the 10 largest commercial banks has deployed chatbots as a component of its customer service. As chatbot technology has evolved, so too has banks’ use of it. Banks are moving from simple, rule-based chatbots toward more sophisticated technologies such as large language models (“LLMs”) and those marketed as “artificial intelligence.”
- Chatbots may be useful for resolving basic inquiries, but their effectiveness wanes as problems become more complex. A review of consumer complaints and of the current market shows that some people experience significant negative outcomes due to the technical limitations of chatbot functionality. These negative outcomes take many forms, including wasted time, feeling stuck and frustrated, receiving inaccurate information, and paying more in junk fees.
- Financial institutions risk violating legal obligations, eroding customer trust, and causing consumer harm when deploying chatbot technology. Like the processes they replace, chatbots must comply with all applicable federal consumer financial laws, and entities may be liable for violating those laws when they fail to do so. Chatbots can also raise certain privacy and security risks.
The bottom line: Be careful. If you need to resolve a problem, try to get a real person on the phone. Most large enterprises still have customer service lines. While they may not be ideal, they are a start.