From Facebook overshares to accidental password posts on Twitter, there are many ways an online persona can leak information useful to malicious hackers. But there’s one aspect of your online identity you might not expect to benefit ne’er-do-wells: your face.
This was made evident when researchers from an IBM cybersecurity division took my LinkedIn profile image for their own “nefarious” purposes. During a video call, they held up a laptop whose camera snapped my face, recognized it by comparing my live visage to the LinkedIn image and, on that match, infected itself with ransomware. (In this case it was a mock version of the infamous WannaCry malware.) This was facial recognition meets cybercrime in action.
The concept was simple but effective: If hackers want to target a specific person, in this case a journalist, they could harvest their images from social media. They could then infect a computer network and launch an attack when the target’s face was detected by the camera. They could’ve done the same with voice recognition or any other aspect of a person’s physical being that can be recorded by a computer.
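The trigger logic described above can be sketched in a few lines. This is a hypothetical illustration, not IBM’s code: the toy embeddings, the distance threshold and the `is_target` helper are all invented for the example, and a real attack would feed camera frames through an actual face-embedding model rather than hand-written vectors.

```python
import math

# Toy face "embeddings" -- in a real system these would come from a
# face-recognition model run on the harvested photo and the live camera frame.
harvested_embedding = [0.12, 0.80, 0.33, 0.51]   # from the target's LinkedIn photo
live_embedding      = [0.11, 0.79, 0.35, 0.50]   # from the webcam frame

MATCH_THRESHOLD = 0.1  # maximum distance still treated as the same face

def euclidean(a, b):
    """Distance between two embeddings; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_target(live, harvested, threshold=MATCH_THRESHOLD):
    """Fire only when the camera sees a face close to the harvested one."""
    return euclidean(live, harvested) < threshold

if is_target(live_embedding, harvested_embedding):
    print("target recognized -- payload would trigger here")
else:
    print("bystander -- malware stays dormant")
```

The same structure works for any recordable trait the article mentions: swap the face embedding for a voiceprint or a geolocation and only the distance function changes.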
The facial recognition-based attack was part of artificially intelligent malware the IBM team created, dubbed DeepLocker. “DeepLocker is a new class of highly evasive and highly targeted malware that fundamentally differs from any malware that exists today,” Dr. Marc Ph. Stoecklin, principal research scientist for cognitive cybersecurity intelligence at IBM Research, told Forbes. The malware conceals its intent until the artificial intelligence within it identifies the target via indicators like facial recognition, voice recognition or geolocation.
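That concealment can be illustrated with a simple key-derivation sketch. To be clear, this is a hypothetical analogue and not DeepLocker itself: the idea shown is that the payload is stored encrypted and the decryption key is derived from the observed target attribute (here a placeholder string standing in for a face embedding or location), so inspecting the malware without ever observing the target reveals nothing about its intent.

```python
import hashlib

def derive_key(attribute: bytes, length: int) -> bytes:
    """Derive a keystream from an observed target attribute (e.g. a face
    embedding or a geolocation). Without the right attribute, no key."""
    key = b""
    counter = 0
    while len(key) < length:
        key += hashlib.sha256(attribute + counter.to_bytes(4, "big")).digest()
        counter += 1
    return key[:length]

def xor(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher for the sketch; real malware would use real crypto."""
    return bytes(d ^ k for d, k in zip(data, key))

payload = b"pretend-malicious-routine"          # never stored in the clear
target_attribute = b"face-embedding-of-target"  # stand-in for the AI's output

encrypted = xor(payload, derive_key(target_attribute, len(payload)))

# A wrong attribute (a bystander's face) yields garbage, not the payload;
# only observing the intended target reproduces the key.
assert xor(encrypted, derive_key(b"some-bystander", len(payload))) != payload
assert xor(encrypted, derive_key(target_attribute, len(payload))) == payload
```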
Ultimately, the researchers want DeepLocker to help them understand the future of security and, possibly, cyberwarfare. “Things are going to be AI vs. AI in the future,” Stoecklin said.
Stoecklin and his colleagues Dhilung Kirat and Jiyong Jang have been researching how to combine AI with cybersecurity. Outside of DeepLocker, they’ve been exploring ways in which IBM’s famous Watson AI tech can assist security teams.
There are other ways LinkedIn photos can be fed to facial recognition tech for malicious purposes. Another, shown off by Trustwave’s Jacob Wilkins, is a tool called Social Mapper. It harvests photos from LinkedIn and uses them to quickly find face matches across other social media sites like Facebook, Google Plus and Twitter, as well as Russian site VK and China’s Weibo.
The ultimate aim of the software, which was open sourced on Wednesday, is to assist benevolent hackers who are employed to test the security of company networks. It should be especially useful for anyone trying out phishing attempts on targets, Wilkins told Forbes ahead of his talk at the Black Hat conference this week, where IBM is also detailing DeepLocker.
He admitted the tool isn’t insanely fast or always accurate. Social Mapper takes around 24 hours to run across an organization of 1,000 people, for instance. And, with its current settings, it has an accuracy of around 70%. “You can whack the threshold up super-high on it, but you might miss something,” Wilkins said.
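Wilkins’s trade-off can be sketched with mock similarity scores. The numbers and the `match_profiles` helper below are invented for illustration and don’t reflect Social Mapper’s actual scoring: the point is simply that raising the match threshold weeds out lookalikes at the risk of dropping a genuine match.

```python
# Mock similarity scores (1.0 = identical face) between the LinkedIn photo
# and candidate profiles found on another social network.
candidates = {
    "real_account":   0.82,  # genuinely the same person
    "lookalike":      0.74,  # different person with a similar face
    "random_profile": 0.31,
}

def match_profiles(scores, threshold):
    """Return the candidate profiles whose similarity clears the threshold."""
    return {name for name, score in scores.items() if score >= threshold}

# A moderate threshold finds the target but drags in a false positive.
print(sorted(match_profiles(candidates, 0.70)))  # ['lookalike', 'real_account']

# "Whack the threshold up super-high" and the false positive disappears --
# but a slightly worse photo of the real account would vanish with it.
print(sorted(match_profiles(candidates, 0.80)))  # ['real_account']
```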
Wilkins tested Social Mapper in a competition in Toronto that asked hackers to use their open source intelligence-gathering skills to help provide leads in real missing persons cases. His team ended up seventh out of 65 teams.
Is there much the average user can do to prevent being caught up in such attacks? For Social Mapper, it won’t work on profiles that aren’t linked to a company on LinkedIn, Wilkins said. He had some simpler advice to boot: “It’s not very social, but if you don’t use a photo of your face, it can’t correlate you across sites.”