In an Era of Fakes, How to Know When Someone Online Is Real

You’re on Facebook, LinkedIn or X and get a message. Maybe it’s from a stranger in your industry, maybe someone from your hometown claiming to know you from way back when. The person wants to reconnect or get your advice.

This could all be wonderful. Or it could be the start of a scam.

Unfortunately, security experts say, the latter is more likely, because personalized schemes to dupe internet users are on the rise. Trouble is, it is harder than ever to know whether that person showing up in your messages is real or not.

A check mark next to someone’s name on social media used to mean their identity had been verified. That’s no longer the case on all sites. Artificial intelligence can help bad actors replicate the voices and appearances of strangers. Online transactions—such as selling furniture on Facebook Marketplace—are magnets for fraud, banks and security experts warn. And schemers are cozying up to people online and pretending to kindle romance to gain access to their money, a form of fraud called “pig butchering.”

The single best step to determine someone’s identity online and protect yourself is to slow down. Don’t rush to respond to an intriguing message. Instead do some vetting before taking things further. Tech companies are beginning to help, too, with Google, LinkedIn and Bumble introducing features to detect suspicious messages and users.

The stakes couldn’t be higher. U.S. consumers lost $1.1 billion in romance scams last year, according to the Federal Trade Commission, while business scams cost people $752 million.

Changing rules

The growing sophistication of scams means guidelines for operating online are changing. Some old rules still apply: Cross-check people and their credentials on legitimate sites such as their employer’s home page. Don’t click on suspicious links or continue the conversation on a different platform.

Earlier this year, after 27-year-old Callie Smith failed to get presale tickets to an Olivia Rodrigo concert, she started looking for resale tickets. Most cost about $700 apiece for a show in Washington, D.C.—too expensive for Smith, who works at a consulting firm and lives in Arlington, Va.

She came across an Instagram account offering 2-for-1 tickets. Smith and the seller exchanged phone numbers and video-chatted, and Smith sent $700 via Apple Cash. The woman immediately blocked her and never transferred the tickets. Smith couldn’t get her money back because instant peer-payment services have few fraud protections.

She googled the person’s name and phone number but found nothing beyond the Instagram profile. Had Smith searched before sending the money, the thin search results would have been a red flag, security analysts say. Using a different payment method with more fraud protections—such as designating the transaction as payment for a good or service in PayPal—also could have protected her. Apps for paying family and friends, including Apple Cash, should be treated like handing over actual cash.

“You just can’t trust everyone,” says Smith, who still hasn’t gotten tickets for the late-July show.

The hot seat

You used to be able to trust the sound of a voice you recognized on the phone, even if the person called from an unfamiliar number, says Roger Grimes, a senior computer-security consultant at KnowBe4, which trains companies’ employees to recognize and respond to bad actors online.

AI can already mimic people’s voices, and the technology is likely to keep getting more realistic. AI-generated deepfakes impersonating colleagues have tricked employees into handing over millions of dollars. In Hong Kong earlier this year, an employee gave $25.5 million to an attacker who used AI to pose as the company’s chief financial officer.

“With generative AI, you may absolutely recognize the voice, or you could recognize the face,” Grimes says. “It’s not science fiction, it’s actually happening.”

If you’re messaging with someone online, pepper them with questions to discern whether they’re human or a bot, says Patrick Long, a security analyst at research firm Gartner. Think about how the sender would have gotten specific information about you. For someone claiming to be family or a friend, ask questions the real person would know or things that can’t be easily found on social media—your mid-’90s senior-class prank, for instance.

Or, have a code word ready. Because AI can replicate voices, someone could call claiming to be your son—and even sound like him—asking for money. Before transferring funds, ask for your prearranged secret word.

If you’re messaging with job recruiters or potential love interests, ask about previous employers and schools they attended, and get names and descriptions of other people they know. Cross-check with other sources online—for recruiters, see whether they are listed on their firm’s website, for example.

If the answers don’t match up, end the conversation and block the sender.

You can cross-check a number by searching for it on Google, but pay close attention to search results. Just because a link appears as the top result doesn’t mean it’s a trusted source.

What companies are doing

Tech companies are taking steps to protect their users and, in some cases, stop suspicious communication before it can reach you. Google is building a tool for Android phone users that alerts them when a potential scam is happening during a call. It detects conversation patterns commonly associated with fraud, such as a “bank representative” asking you to share your password.

Users will have to opt into the feature, and Google will share more information about it later this year, a spokesman says.

LinkedIn has a similar feature for messaging that detects potentially harmful content, such as when someone asks to leave LinkedIn to communicate on another platform.

Social-media services can police some activities of bad actors on their apps. But if, say, you end up texting with someone who initially messaged you on LinkedIn, you lose that backstop, security analysts say.

If a message from someone you haven’t communicated with before raises red flags, LinkedIn automatically sends it to spam. If a message from someone known to you seems suspicious, it still shows up in your inbox with a warning. The feature is on by default for U.S. users.

Bumble’s Deception Detector tool uses AI to assess the authenticity of profiles on the dating app. Two months after launch, member reports of scam accounts fell by 45%, a Bumble spokeswoman says.

Before you pay

Always handle digital interactions with skepticism, especially when asked to do something out of the ordinary, such as sending a large sum of money, security analysts say.

Weigh the risks of the conversation, says Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation, a nonprofit digital rights group. A conversation with a stranger who is asking for your thoughts about Friday’s football game is different from someone asking you to send money to fund a business.

“Pay very close attention to what the ask is, what this person is trying to get you to do,” Galperin says.

Never share personal information like your Social Security number, and if money is changing hands, pay in a way that is protected. Apps such as PayPal and Venmo include purchase protections if you designate a transaction as payment for goods or services. Apple Cash doesn’t have such an option.

Before sending any money, such as to buy used furniture or concert tickets, let the new contact know the security steps you are taking, and note how the person responds.