Greetings from Lagos, Nigeria! I’m here to meet up with a very wealthy prince who is generously willing to share his vast fortune with me if I can just “do the needful” and give him some innocuous information about myself.
I’m kidding, of course. It would obviously not be a career-enhancing move for this armadillo to fall for such a 1990s-era email scam. In the unlikely event you’ve never received one of these Nigerian prince emails, the scam works thusly: a scammer (not necessarily based in Nigeria) blasts a phishing email out to thousands of addresses, promising a large sum of money in return for some help. Victims who respond are usually asked to make an advance payment or share their personal details to unlock the much larger reward, which, of course, never comes. (Fun fact: by 2100, Lagos is projected to be the largest city in the world, with some 88 million people, so there stands to be a near-infinite number of “do the needful” requests coming online in the decades ahead. Oh joy!)
Interestingly, this kind of phishing scam has its origins in the good ol’ days of snail mail. Back then, the scam was called the “Spanish Prisoner,” and the con was that someone locked in a Spanish jail knew where a vast fortune was buried. The scammer would send out letters to marks who might “send bail money,” promising that the freed prisoner would then share the larger fortune with the victim.
Back to the more modern world of email. It used to be that these prince emails were so poorly executed that they were fairly easy to spot and avoid. There were dead giveaways like inconsistent fonts and point sizes, odd colors, and comically nonsensical writing.
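In fact, the old lures were so formulaic that a few lines of code could flag them. Here’s a toy sketch of the kind of crude heuristics that used to suffice (the phrases, weights, and example text are invented for illustration, not any real filter):

```python
import re

# Classic advance-fee wording (illustrative list, not exhaustive)
SUSPICIOUS_PHRASES = [
    "do the needful",
    "vast fortune",
    "advance payment",
    "strictly confidential",
]

def crude_phishing_score(subject: str, body: str) -> int:
    """Return a rough suspicion score; higher means more lure-like."""
    score = 0
    text = f"{subject} {body}".lower()
    # Two points per classic advance-fee phrase.
    score += sum(2 for phrase in SUSPICIOUS_PHRASES if phrase in text)
    # Comically excessive punctuation was another giveaway.
    score += len(re.findall(r"!{2,}", body))
    # So were ALL-CAPS runs like "URGENT BUSINESS PROPOSAL".
    score += len(re.findall(r"\b[A-Z]{5,}\b", body))
    return score

body = "URGENT BUSINESS PROPOSAL!! Kindly do the needful to claim the vast fortune."
print(crude_phishing_score("From the desk of the Prince", body))  # prints 8
```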
Unfortunately, cybercriminals using generative AI are changing all this, and the laughably obvious lure you were once proud to spot is now far more sophisticated.
How will they do it?
Many will use software overlays sold on the dark web by other cybercriminals (DarkBERT, for example) that make it easy to identify a target and gather context about them (e.g., past written emails, voice recordings, or videos). They then drop that data into ChatGPT, et voilà: an optimized lure email that sounds, reads, or even looks like it came from the person they are impersonating.
Now, why wouldn’t OpenAI (and, essentially, Microsoft) just shut these guys down? Well, cybercriminals can be quite clever.
They are not actually paying OpenAI for premium ChatGPT licenses. They go out and find other people’s ChatGPT accounts with weak passwords, copy those credentials, and then use them to hook DarkBERT, FraudGPT, WormGPT, and the like up to a premium ChatGPT license. Their nefarious use is thus commingled with legitimate users’ activity, payments, and credentials. As of May 2023, one security firm reported 100,000 malware-infected devices with saved ChatGPT credentials, so there are surely many more now.
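How would anyone even notice an account being shared this way? One classic tell is “impossible travel”: the same credentials logging in from several countries within hours. Here’s a minimal sketch of that idea (the log format, threshold, and account names are all hypothetical, and this is certainly not how OpenAI actually does it):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical auth log: (account, country, login timestamp)
auth_log = [
    ("alice@example.com", "US", datetime(2023, 5, 1, 9, 0)),
    ("alice@example.com", "US", datetime(2023, 5, 1, 14, 0)),
    ("alice@example.com", "NG", datetime(2023, 5, 1, 14, 5)),
    ("alice@example.com", "RU", datetime(2023, 5, 1, 14, 7)),
]

def flag_shared_credentials(log, max_countries_per_day=2):
    """Flag accounts seen in more countries per day than one human can visit."""
    seen = defaultdict(set)  # (account, date) -> countries seen that day
    for account, country, ts in log:
        seen[(account, ts.date())].add(country)
    return [key for key, countries in seen.items()
            if len(countries) > max_countries_per_day]

print(flag_shared_credentials(auth_log))
# [('alice@example.com', datetime.date(2023, 5, 1))]
```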
Simply put: it’s YOU against AI for the Dark Side, and you’ll be taken for a ride unless you have AI for the Good Guys. Think RPost! RMail empowers you to eavesdrop on the cybercriminal eavesdroppers and hack the hackers, so you can pre-empt their actions. With RPost’s latest Email Eavesdropping™ detection service, you and your IT staff become the white-hat spies watching the black-hat eavesdroppers, giving you visibility into whether, when, and by whom the correspondence between you and your email recipients is being snooped on.
When you add this Email Eavesdropping™ detection technology to RSign eSignatures, RMail secure large file sharing, and RDocs document rights and controls services, we can help the white-hat spy (you or your IT guru) sleep better at night, as this RPost tech asks our AI to simply auto-lock files that black-hat cybercriminal eavesdroppers are trying to access!
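To make the eavesdropping-detection concept concrete (and to be clear, what follows is not RPost’s implementation, just the textbook “canary token” version of the idea): embed a unique per-message URL in each outbound email, and any fetch of that URL from an unexpected network means someone besides your recipient is reading the thread. A minimal sketch using Flask, with all names and URLs hypothetical:

```python
import uuid

from flask import Flask, request

app = Flask(__name__)
tokens = {}  # token -> intended recipient (a real system would use a database)

def mint_canary(recipient: str) -> str:
    """Create a unique tracking URL to embed in an outbound email."""
    token = uuid.uuid4().hex
    tokens[token] = recipient
    return f"https://canary.example.com/c/{token}"

@app.route("/c/<token>")
def canary_hit(token):
    # Every hit is logged; a hit from an unexpected network is a red flag
    # that someone besides the recipient is reading the thread.
    recipient = tokens.get(token, "unknown")
    print(f"canary for {recipient} fetched from {request.remote_addr}")
    return "", 204
```

Commercial services layer far more on top of this (forensics, geolocation, auto-locking of documents), but the core signal is the same: a read you didn’t expect.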
Now that is true PRE-Crime protection…pre-empting cybercrimes in progress with RPost AI. Don’t hesitate to do the needful and contact us to learn more.