AI can steal your passwords. By “listening” for the distinguishing features of each key press, such as its characteristic sound frequencies, software can reconstruct what you type.
According to a recent study, malicious actors could use artificial intelligence (AI) techniques to harvest user passwords with near-perfect precision. The study, published via arXiv, the preprint repository hosted by Cornell University in the United States, found that when the AI program listened through a nearby smartphone, it could reconstruct the typed password with 95% accuracy.
The study itself was carried out by a team of computer-security researchers in the United Kingdom, who trained an AI model to recognise keystroke sounds on a 2021 MacBook Pro. The model also proved remarkably accurate at “listening” to keystrokes picked up over a Zoom video call.
Over Zoom, the AI identified the keystrokes with an accuracy of 93%, a record for that medium, according to the researchers. They also emphasised that many users do not realise hostile actors can record their typing in order to exploit this weakness and gain unauthorised access to their accounts. The technique is known as an “acoustic side-channel attack.”
What does an acoustic side-channel attack entail?
An acoustic side-channel attack is a type of cyberattack that exploits the unintended sounds or vibrations a computing device emits in order to obtain sensitive information. Side-channel attacks in general exploit information that leaks while a system operates, classically while a cryptographic algorithm executes, such as timing, power consumption, electromagnetic radiation and, in this case, acoustic signals.
In an acoustic side-channel attack, the attacker uses specialised tools or techniques to capture the sounds a device produces while in use. These could include keystrokes on a keyboard, mouse clicks, or even the faint noises internal components make as they process data. Such emissions can reveal useful information about how the device is being used, such as the timing and sequence of keystrokes or other user inputs.
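The first practical step is simply finding the key presses in a recording. The study’s exact pipeline is not reproduced here, but as a minimal sketch of the idea, one might isolate presses by looking for bursts of acoustic energy above the background level. The function name and thresholds below are illustrative, not taken from the paper:

```python
import numpy as np

def detect_keystrokes(audio, sr, win_ms=10, thresh_ratio=4.0, gap_ms=100):
    """Flag short windows whose energy jumps well above the recording's
    median (background) energy, then merge neighbours into events."""
    win = int(sr * win_ms / 1000)           # samples per analysis window
    n = len(audio) // win
    energy = (audio[: n * win].reshape(n, win) ** 2).sum(axis=1)
    threshold = thresh_ratio * np.median(energy) + 1e-12
    hot = np.where(energy > threshold)[0]   # windows containing a press
    min_gap = max(1, gap_ms // win_ms)      # windows between distinct presses
    onsets, last = [], -min_gap
    for w in hot:
        if w - last >= min_gap:
            onsets.append(w * win)          # sample index where a press starts
        last = w
    return onsets

if __name__ == "__main__":
    sr = 44_100
    audio = np.random.randn(2 * sr) * 0.01          # two seconds of quiet hiss
    for t in (0.5, 1.1, 1.6):                       # three synthetic "clicks"
        i = int(t * sr)
        audio[i:i + 200] += np.random.randn(200) * 0.5
    print(detect_keystrokes(audio, sr))             # roughly [22050, 48510, 70560]
```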
By analysing the captured audio, an attacker can then infer sensitive information such as passwords, PINs or other secrets being entered on the targeted device. This form of attack is especially worrying because users are often unaware that acoustic emissions can be used to compromise their security.
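That analysis usually begins by converting each isolated press into a compact time-frequency representation. A mel-spectrogram is a common choice for this kind of attack, and roughly what is meant above by “distinguishing features” of a press. The sketch below assumes the librosa library, and the function name keystroke_fingerprint is invented for illustration:

```python
import numpy as np
import librosa  # pip install librosa

def keystroke_fingerprint(clip, sr, n_mels=64):
    """Convert one isolated keystroke clip into a mel-spectrogram, a
    time-frequency picture of the sound that a classifier can compare
    across keys. Decibel scaling keeps quieter harmonics visible next
    to the loud initial thud of the press."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

if __name__ == "__main__":
    sr = 44_100
    clip = np.random.randn(int(0.05 * sr)).astype(np.float32)  # stand-in 50 ms clip
    print(keystroke_fingerprint(clip, sr).shape)               # (64, n_frames)
```

Two presses of the same key produce similar fingerprints, while different keys, sitting at different positions under different fingers, produce subtly different ones, which is exactly the signal a classifier learns to separate.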
The study also highlighted how pervasive keyboard acoustic emissions are: they not only offer attackers an easily accessible avenue of attack, but also lead people to underestimate the risk and therefore take no precautions. People shield their screens while typing passwords, for example, yet give little thought to masking the sound of their keyboards.
To assess the AI program’s precision, the researchers methodically pressed each key on the laptop 25 times, varying factors such as pressure and finger position with each press. From these recordings, the software learned the distinguishing features of each key’s press, such as its characteristic sound frequencies. The smartphone used for testing was an iPhone 13 mini, placed 17 centimetres from the keyboard.
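Those 25 presses per key become a labelled training set. The study reportedly fed spectrogram images of each press to a deep learning classifier; in the sketch below, random feature vectors and a simple k-nearest-neighbours model stand in purely to show the shape of that pipeline, so its near-perfect score reflects the synthetic data, not the real attack:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-in corpus: 36 keys x 25 presses each, with every
# press already reduced to a fixed-length feature vector (e.g. a
# flattened mel-spectrogram). Random vectors keep the sketch runnable.
KEYS = list("abcdefghijklmnopqrstuvwxyz0123456789")
X, y = [], []
for key in KEYS:
    signature = rng.normal(size=320)          # each key's characteristic "sound"
    for _ in range(25):                       # 25 presses per key, as in the study
        X.append(signature + rng.normal(scale=0.3, size=320))
        y.append(key)
X, y = np.array(X), np.array(y)

# A k-NN model stands in for the study's deep classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = KNeighborsClassifier(n_neighbors=3).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.0%}")
```

Once such a model is trained, every new keystroke overheard by the phone’s microphone can be segmented, fingerprinted and classified in the same way, which is how a recorded typing session becomes a recovered password.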