Tech News: AI gathering of personal information can make you a phishing target

Published Aug 21, 2020

DURBAN - Due to the growing popularity of artificial intelligence (AI) in appliances, devices and everyday applications, its misuse and exploitation for criminal purposes have flourished.

AI poses a real threat in the form of AI-facilitated crime:

• As a tool for crime, such as theft, intimidation, terror, extortion, generating fake content used in blackmail, audio/video impersonation through “deepfake” content, and tailored phishing through the AI gathering of personal information;

• As a target of criminal activity, where criminals attack AI systems themselves, for example by evading detection, exploiting vulnerabilities, bypassing protective AI systems, making trusted or critical systems fail or behave erratically, turning AI-driven driverless cars into kinetic weapons, and manipulating voters and stock markets; and

• As a context for crime, where fraudulent activities might depend on the victim believing that some AI functionality is possible.

Unfortunately, some of the strengths of AI – its ability to learn by itself, to act autonomously, to deal independently with tasks previously requiring human intelligence and intervention, and to make its own decisions – could be exploited by criminals, thus creating a possible threat.

Some deep learning systems that have been studied could be fooled by an adversary with prior access to the software. In this case AI is used to find the system’s own hidden weaknesses: minute adversarial perturbations of the input are then crafted to manipulate the output.
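To see how such an attack works in principle, consider the minimal sketch below. It is a hedged illustration, not the systems the researchers studied: a toy logistic-regression classifier with made-up weights is pushed towards the wrong answer using the classic fast-gradient-sign method, which nudges every input feature a small, bounded amount in the direction that most increases the model's loss. On real, high-dimensional deep networks the same recipe works with imperceptibly small per-pixel changes.

```python
# Minimal sketch (Python/NumPy) of an adversarial perturbation.
# The "victim" is a hypothetical logistic-regression classifier whose
# weights the attacker has inspected (white-box access).
import numpy as np

rng = np.random.default_rng(0)

d = 100
w = rng.normal(size=d)      # stand-in trained weights
b = 0.0                     # bias

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model scores confidently as class 1.
x = 0.05 * w
y_true = 1

# Gradient of the cross-entropy loss with respect to the INPUT.
# For logistic regression this is simply (p - y) * w.
grad_x = (predict(x) - y_true) * w

# Fast-gradient-sign step: move every feature a bounded amount in the
# direction that increases the loss most.
epsilon = 0.15
x_adv = x + epsilon * np.sign(grad_x)

print(f"confidence before attack: {predict(x):.3f}")      # near 1.0
print(f"confidence after attack:  {predict(x_adv):.3f}")  # near 0.0
```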

Bots, fake content and harassment

Potential criminals could also undermine social bots by biasing their learned classification and generation data structures via user interaction. This is what happened with Microsoft’s unfortunate Twitter bot “Tay”, which quickly learned from user interactions to direct “obscene and inflammatory tweets” at a feminist activist, leading to complaints of harassment.

This problem is further complicated by the question of assigning liability. Does the liability lie with Microsoft, which released Tay on Twitter, a platform known for its problems with harassment, or also with the users who did not use the technology according to its design? Or should the intention or knowledge of wrongdoing (the so-called mens rea) be considered, since attribution of intent is a function of engineering, application context, human–computer interaction, and perception?

AI has advanced so rapidly that sophisticated fake content can now be produced with ease. Software exists that can produce synthetic videos based on a real video, where the original “actor’s” face has been replaced with another.

The new face is not merely copied and pasted from pictures but synthesised by a generative neural network after it has been trained on videos featuring the new person. A senior citizen could, for example, be tricked into making financial transfers after a video chat with an apparently trusted party.

Many of these synthetic videos are pornographic, increasing the risk that synthesised fake content will be used to harass victims. Others are political and could damage sensitive diplomatic relationships, or even start a war, when the face of a president or leader is misused.

Theft, fraud, forgery and impersonation

AI is often used to gather personal data, which is then used in combination with other AI methods to forge an identity and gain the trust of a third party.

The first method of gathering personal data involves the use of social media bots to target users at large scale and low cost, exploiting the capacity of these AI bots to generate posts, impersonate people, and consequently acquire trust through friendship requests or “follows” on sites like Twitter, LinkedIn, and Facebook. When users indiscriminately accept friendship requests, they unknowingly add bots to their friendship circles, giving the criminal access to personal information that is only available to friends and compromising their privacy. Identity-cloning bots have succeeded in having 56 percent of their friendship requests accepted on LinkedIn.

The second method for gathering personal data makes partial use of conversational social bots for social engineering. AI is used to manipulate behaviour by bonding with a victim and then exploiting the relationship to obtain information from, or access to, their computer. Using a misleading social botnet to discover vulnerable individuals, criminals collect the harvested personal data and re-use it to produce intensified attacks of simulated familiarity, empathy, and intimacy, leading to even greater revelations of personal data.

The third method for gathering personal data from users is the notorious automated personalised phishing (known as spear phishing). Machine learning techniques are used to craft messages personalised to a specific user.

The fourth method of AI snooping entails the use of phones, smart televisions and home hubs to gather information. Speech recognition software is used to sift the gathered data for exploitable fragments (e.g. passwords, bank details or admissions of affairs).

Once the criminal has gathered sufficient personal data through the use of AI, AI is then used to forge an identity. Using machine learning, Adobe’s voice synthesis software is able to learn adversarially and imitate someone’s individual speech pattern from a 20-minute recording of his or her voice. Advanced AI-supported voice synthesis software poses a real threat to the biometric security processes often used to unlock devices, doors, safes and vehicles.

Crime prevention

Conversely, AI also has tremendous potential for crime prevention, such as machine perception (the capability of a computer system to interpret data in a manner similar to the way humans use their senses) in vehicle tracking, person recognition and X-ray threat detection.
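To give a concrete, if hedged, sense of what machine perception looks like in code, the sketch below uses an off-the-shelf pretrained image classifier to label a single photograph. It assumes PyTorch and torchvision are installed, and the image file name is a hypothetical stand-in; real tracking or X-ray screening systems use specialised models and sensors, but the underlying step of a network turning raw pixels into a label is the same.

```python
# Minimal sketch of machine perception: a pretrained classifier "looks at"
# an image and names what it sees. "street_scene.jpg" is a hypothetical file.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT            # ImageNet-pretrained weights
model = resnet18(weights=weights).eval()      # inference mode
preprocess = weights.transforms()             # matching resize/normalise steps

image = Image.open("street_scene.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)        # add a batch dimension

with torch.no_grad():
    probabilities = model(batch).squeeze(0).softmax(dim=0)

top_prob, top_class = probabilities.max(dim=0)
label = weights.meta["categories"][top_class.item()]
print(f"The system 'sees': {label} ({top_prob.item():.1%} confidence)")
```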

In addition to the technological AI solutions for fraud detection and prevention, there are a number of legal solutions that can be considered to address AI-facilitated crime. Legal solutions may involve limiting AI agents’ autonomy or their deployment, for example in the case of autonomous cars. If legislation does not forbid higher levels of autonomy for a specific autonomous agent, the law should require that this freedom be linked to technological remedies that prevent nascent criminal acts once the agent is deployed.

Another possibility is to oblige developers to deploy autonomous agents only when they have run-time legal compliance layers, which take declarative specifications of legal rules and impose constraints on the run-time behaviour of the agent. However, this shift from regulation to regimentation would implement the “code-as-law” concept, which treats software code as a regulator in its own right, since the architecture it produces can serve as an instrument of social control over those who use it.
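What such a compliance layer might look like is sketched below. This is a hedged, simplified illustration, not an existing framework: the rules, the Action type and the reckless_agent function are all hypothetical. The key idea is that legal rules are declared separately from the agent's own logic and every proposed action is checked against them at run time before it is allowed to execute.

```python
# Minimal sketch of a run-time legal compliance layer wrapping an
# autonomous agent. All names and rules here are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    kind: str                 # e.g. "drive", "trade", "message"
    speed_kmh: float = 0.0
    order_value: float = 0.0

# Declarative specifications of legal rules: each returns True if the
# proposed action is permitted.
Rule = Callable[[Action], bool]

RULES: List[Rule] = [
    lambda a: a.kind != "drive" or a.speed_kmh <= 120,          # speed limit
    lambda a: a.kind != "trade" or a.order_value <= 1_000_000,  # position cap
]

class ComplianceLayer:
    """Vetoes any action proposed by the agent that breaks a rule."""

    def __init__(self, agent: Callable[[], Action], rules: List[Rule]):
        self.agent = agent
        self.rules = rules

    def step(self) -> Optional[Action]:
        proposed = self.agent()
        if all(rule(proposed) for rule in self.rules):
            return proposed                  # compliant: execute as planned
        print(f"Blocked non-compliant action: {proposed}")
        return None                          # constrained at run time

# Hypothetical agent that proposes an illegal action.
def reckless_agent() -> Action:
    return Action(kind="drive", speed_kmh=160)

ComplianceLayer(reckless_agent, RULES).step()   # prints a veto message
```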

Alternatively, social simulation could be used as a test bed before deploying autonomous agents. In a market context, for instance, regulators could act as “certification authorities”, testing new trading algorithms in a system simulator to assess their likely impact on the system before allowing the developer of the algorithm to run it live.
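The sketch below illustrates this certification idea under loudly stated assumptions: the random-walk price model, the linear price-impact term, the momentum strategy and the volatility and drawdown thresholds are all invented for illustration, and bear no resemblance to how a real regulator or exchange would test an algorithm.

```python
# Hypothetical "certify before deployment" check: run a candidate trading
# algorithm in a crude market simulator and certify it only if its
# simulated impact stays within agreed limits.
import numpy as np

rng = np.random.default_rng(42)

def candidate_algorithm(price_history):
    """Toy momentum strategy submitted for certification: returns the
    order size (+buy / -sell) for the next tick."""
    if len(price_history) < 2:
        return 0.0
    return 50.0 * np.sign(price_history[-1] - price_history[-2])

def simulate(algorithm, ticks=1_000, impact_per_unit=0.001):
    """Random-walk prices plus a linear price impact from the algorithm's
    own orders; a deliberately crude stand-in for a system simulator."""
    prices = [100.0]
    for _ in range(ticks):
        order = algorithm(prices)
        shock = rng.normal(scale=0.1)
        prices.append(prices[-1] + shock + impact_per_unit * order)
    return np.array(prices)

def certify(algorithm, max_volatility=0.5, max_drawdown=5.0):
    prices = simulate(algorithm)
    volatility = np.std(np.diff(prices))                       # tick-to-tick moves
    drawdown = np.max(np.maximum.accumulate(prices) - prices)  # worst peak-to-trough fall
    passed = volatility <= max_volatility and drawdown <= max_drawdown
    print(f"volatility={volatility:.3f}, drawdown={drawdown:.2f}, certified={passed}")
    return passed

certify(candidate_algorithm)
```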

In addition to the above preventative methods, AI criminality monitoring is of the utmost importance. AI criminality predictors using domain knowledge could be used effectively to predict identity theft, for example by notifying users whether the location of the “friend” messaging them meets their expectations. AI could also be used to discover crime patterns, but is limited by the current ability to link offline identities to online activities.
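A minimal sketch of that location-expectation check appears below. The contact history, coordinates and distance threshold are hypothetical, and a real system would weigh many more signals (device, writing style, time of day), but the core comparison is this simple.

```python
# Hypothetical identity-theft early warning: flag a message from a "friend"
# whose apparent location is far from anywhere that contact normally is.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Locations (lat, lon) previously associated with this contact.
known_locations = {
    "thandi": [(-29.86, 31.02), (-26.20, 28.05)],   # Durban, Johannesburg
}

def location_meets_expectation(contact, message_location, threshold_km=500):
    distances = [haversine_km(*loc, *message_location)
                 for loc in known_locations.get(contact, [])]
    return bool(distances) and min(distances) <= threshold_km

# An incoming message apparently sent from Lagos (6.52 N, 3.38 E).
if not location_meets_expectation("thandi", (6.52, 3.38)):
    print("Warning: this 'friend' is messaging from an unexpected location.")
```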

Perhaps a very useful approach is to address traceability by embedding hidden clues in the components that make up AI tools used by criminals. For instance, Adobe’s voice replication software places a hidden watermark in the generated audio. However, lack of knowledge and control over who develops AI tools limits traceability via watermarking and similar techniques.
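How such a hidden clue might work can be illustrated with the hedged sketch below. Adobe has not published the details of its watermark, so this stand-in simply hides a short tool identifier in the least significant bits of 16-bit audio samples, where it is inaudible but can later be read back to trace a generated clip; real forensic watermarks are far more robust to compression and editing.

```python
# Illustrative stand-in for a hidden audio watermark (not Adobe's scheme):
# hide the bytes of a tool identifier in the least significant bits (LSBs)
# of int16 audio samples, then recover them later for traceability.
import numpy as np

def embed_watermark(samples: np.ndarray, tag: str) -> np.ndarray:
    """Hide the UTF-8 bytes of `tag` in the LSBs of int16 samples."""
    bits = np.unpackbits(np.frombuffer(tag.encode("utf-8"), dtype=np.uint8))
    marked = samples.copy()
    marked[: len(bits)] = (marked[: len(bits)] & ~1) | bits   # overwrite LSBs
    return marked

def extract_watermark(samples: np.ndarray, tag_length: int) -> str:
    """Read `tag_length` bytes back out of the LSBs."""
    bits = (samples[: tag_length * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes().decode("utf-8")

# Hypothetical generated audio: one second of low-level noise at 16 kHz.
audio = np.random.default_rng(1).normal(scale=200, size=16_000).astype(np.int16)

marked = embed_watermark(audio, "TOOL-1234")
print(extract_watermark(marked, len("TOOL-1234")))   # -> TOOL-1234
```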

The future

Powerful AI applications have arrived, delivered by advances in machine learning that autonomously builds algorithms from data; deep learning that does so in a manner inspired by the human brain; and powerful computers that do it fast and cheaply. These applications have been used effectively for the benefit of society. Unfortunately, the very same applications could be used by criminals. The future will see even more powerful AI applications - their use for good or evil will depend on our ability to timeously take preventative measures.

Professor Louis C H Fourie is a futurist and technology strategist

BUSINESS REPORT ONLINE
