Sat, Dec 21 2024
Biometric identity solution provider iProov has released its latest study, which shows that the frequency of indiscriminate attacks in 2022 and 2023 ranged from 50,000 to 100,000 per month. Over the same period, the number of groups exchanging information relevant to attacks against biometric and remote human identification systems grew dramatically.
The hype surrounding artificial intelligence (AI) has persisted from 2023 to the present, driven by the seemingly endless possibilities of the technology. Threats that weaponize these advances, however, have developed just as swiftly.
Easily accessible, criminally weaponized generative AI tools are creating a growing need for more secure remote identity verification. According to iProov's new report, Threat Intelligence Report 2024: The Impact of Generative AI on Remote Identity Verification, malicious actors are using sophisticated AI tools, pairing convincing face swaps with emulators, to conceal the presence of virtual cameras from biometric systems, making it harder for biometric solution providers to detect deepfakes.
An emulator is software that simulates a user's device, such as a smartphone. As a result, face swaps and emulators have become the go-to tools for attackers looking to commit identity theft, a combination iProov calls "the perfect storm."
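To make the threat concrete, the sketch below shows one naive way a verification backend might sanity-check the device metadata a client reports, the kind of signal an emulator must spoof convincingly to pass. This is purely illustrative: the field names and thresholds are hypothetical, and real biometric vendors rely on far richer signals than simple consistency rules.

```python
# Illustrative sketch only: a naive consistency check on self-reported
# device metadata. Field names ("platform", "camera_vendor", "frame_rate")
# are hypothetical, not from any real biometric API.

def metadata_looks_consistent(meta: dict) -> bool:
    """Flag obvious mismatches an emulator might produce when it
    spoofs one metadata field but forgets a related one."""
    # A device claiming to be an iPhone should report an Apple camera.
    if meta.get("platform") == "iOS" and meta.get("camera_vendor") != "Apple":
        return False
    # Genuine phone cameras report plausible frame rates; virtual
    # cameras injected by emulators often default to unusual values.
    if not 15 <= meta.get("frame_rate", 0) <= 120:
        return False
    return True

print(metadata_looks_consistent(
    {"platform": "iOS", "camera_vendor": "Apple", "frame_rate": 30}))    # True
print(metadata_looks_consistent(
    {"platform": "iOS", "camera_vendor": "Generic", "frame_rate": 30}))  # False
```

Because an attacker controlling an emulator can forge every field, checks like this are easily defeated once known, which is why the report stresses monitoring evolving attack patterns rather than relying on static rules.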
Threat actors began using emulators and metadata spoofing to launch digital injection attacks across platforms in 2022, but this activity really took off in 2023, rising 353 percent from H1 to H2 of that year.
These attacks are evolving quickly and pose serious new risks to mobile platforms; injection attacks against the mobile web, for example, increased by 255 percent from H1 to H2 2023.
iProov also reported growing use of bundled AI imagery tools, which make launching an attack far simpler and faster, a trend expected to continue. Between H1 and H2 2023, the use of deepfake media such as face swaps combined with metadata spoofing tools rose by 672 percent.
Trends in emerging threats
"Generative AI has greatly increased threat actors' productivity levels," said Andrew Newell, chief scientific officer at iProov. "These tools are easily accessible, reasonably priced, and can be used to create highly convincing synthesised media, such as face swaps or other deepfakes that can easily fool the human eye, as well as less sophisticated biometric solutions." This only makes the need for highly secure remote identity verification more pressing.
"We don't know what's next, but the data in our analysis shows that threat actors now use face swaps as their deepfake of choice. The only way to stay one step ahead of them is to keep a close eye on their attacks, their frequency, who they are targeting, the techniques they employ, and the motivations behind them, and to develop a set of hypotheses from that."
Presentation attacks and digital injection attacks are the two main attack types the iProov Security Operations Center has observed. While the impact of presentation and digital injection attacks may vary, both can be very dangerous when paired with conventional cyberattack techniques such as metadata manipulation.
The study also includes case studies of anonymized prolific threat actors, each assessing the sophistication of the actor's tactics, effort, and attack frequency. This research offers invaluable insight and helps iProov continuously improve the security of its biometric platform, reducing the risk that organizations' current and future remote identity verification transactions will be exploited.