Scientists warn of alien cyberattack

Published Feb 28, 2018


You might not remember this, but the alien invasion in the 1990s sci-fi blockbuster "Independence Day" began not with laser blasts but with a cyberattack. As Jeff Goldblum's computer nerd character explains in the film, the alien fleet hacks into Earth's satellites, hijacking their communication systems to coordinate their (ultimately unsuccessful) assault on humanity.

To call that scenario far-fetched is an understatement. But a pair of astrophysicists say in a bizarre paper released this month that the possibility of an extraterrestrial hack - one far more sophisticated than the attack in "Independence Day" - is worth taking seriously. (How seriously to take the paper, which was published in an unconventional, non-peer-reviewed academic archive, is another matter.)

Michael Hippke of the Sonnenberg Observatory in Germany and John Learned of the University of Hawaii warn in their article that an alien message from outer space could contain malicious data designed to wreak havoc on Earth. Such a message would be impossible to "decontaminate with certainty" and could pose an "existential threat," they argue; therefore humans should use extreme caution.

Scientists, academics and futurists have long debated whether humanity would benefit from contact with extraterrestrial intelligence, or ETI. The Search for Extraterrestrial Intelligence Institute, the research organization that looks for alien life, seeks a peaceful dialogue. Its researchers listen for communication signals from intelligent aliens and send out signals from Earth in hopes that another civilization might pick them up. So far, no one has heard anything that sounds like life.

Hippke and Learned's paper - which reads more like a thought experiment than serious scholarship - ponders the dangers of receiving these theoretical interstellar missives.

"While it has been argued that sustainable ETI is unlikely to be harmful, we can not exclude this possibility," the researchers wrote in the article, which was first reported by Motherboard. "After all, it is cheaper for ETI to send a malicious message to eradicate humans compared to sending battleships."

The researchers envision several different types of malicious communications. A simple one might contain a threat like "We will make your sun go supernova tomorrow."

"True or not, it could cause widespread panic," they wrote, or have a "demoralizing cultural influence." A longer, more nuanced message could sow confusion and fear, especially if it's received by amateurs, according to the paper. The spread of such messages could not be easily contained, but they could at least be printed out and examined on paper, and wouldn't necessarily require a computer to decipher.

But large, complex messages written in code would.

Messages that contain big diagrams, algorithms or equations could come with viruses hidden in them, the researchers say. They couldn't be printed out and examined manually, so they'd have to be deciphered on a computer, the paper speculates.

The messages could also be compressed, much as personal computers compress large files for more efficient transfer, and the instructions needed to decompress them could arrive as code within the message itself. Executing those billions of decompression instructions could unleash the malware, according to the paper.
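
To make that risk concrete, here is a minimal, hypothetical sketch (not from the paper, and with invented names) of why a self-extracting message is dangerous: if the recipient has to run sender-supplied decompression code just to read the payload, that code can do anything the receiving machine allows.

```python
# Hypothetical illustration of a "self-extracting" message: the receiver must
# execute code supplied by the sender before reading a single byte of content.
import zlib

# The sender bundles a compressed payload with the source code of the
# decompressor needed to read it.
payload = zlib.compress(b"The galactic library is attached...")
decompressor_source = """
import zlib

def decompress(data):
    # An attacker could put anything here: this code runs with the
    # receiver's full privileges before the message is even readable.
    print("side effect: this line already ran on the receiver's machine")
    return zlib.decompress(data)
"""

# A naive receiver executes the sender's decompressor to get at the message.
# That is arbitrary code execution, which is the paper's point.
namespace = {}
exec(decompressor_source, namespace)  # runs untrusted code
print(namespace["decompress"](payload).decode())
```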

In their most out-there example, Hippke and Learned imagine a sort of extraterrestrial spearphishing, the technique human hackers sometimes use to gain personal information from victims under the guise of a trustworthy source. Russian hackers likely used this technique to gain access to the Democratic National Committee's computer networks.

As the researchers write in their paper, the header of such a message might read: "We are friends. The galactic library is attached. It is in the form of an artificial intelligence which quickly learns your language and will answer your questions. You may execute the code following these instructions. . ."

Extraordinary steps could be taken to isolate the artificial intelligence - the researchers even suggest building a computer on the moon to execute the code and rigging it with "remote-controlled fusion bombs" to destroy it in case of an emergency.

This idea is known as an "AI box," essentially a solitary confinement cell for an artificial intelligence. Experts have long discussed it as a way to contain a potentially dangerous artificial intelligence. Some have argued that a sufficiently advanced computer program could easily manipulate its human guards and find a way out of the "box."

Hippke and Learned say efforts to imprison an artificial intelligence delivered by extraterrestrials would probably fail. Even a strictly controlled, military-style containment effort could go awry.

"Current research indicates that even well-designed boxes are useless, and a sufficiently intelligent AI will be able to persuade or trick its human keepers into releasing it," Hippke and Learned wrote.

For instance, the researchers say, the artificial intelligence could offer a cure for cancer in exchange for an increase in computer capacity.

"We could decline such offers," the paper posits, "but shall not forget that humans are involved in this experiment. Consider a nightly conversation between the AI and a guard: 'Your daughter is dying from cancer. I give you the cure for the small price of. . .'. We can never exclude human error and emotion."

Once the artificial intelligence is out, it could do all kinds of damage, like dupe us into building nanobots that could take over the world, according to the paper. Other forms of annihilation could await if the machine decides humans are "as irrelevant as monkeys," Hippke and Learned say.

But the paper ends on an optimistic note. The researchers suggest that a message from extraterrestrials is likely to be benign. Understanding the risks is what's important, they say.

"The potential benefits from joining a galactic network might be considerable," the researchers wrote. "Overall, we believe that the risk is very small (but not zero), and the potential benefit very large, so that we strongly encourage to read an incoming message."

The Washington Post 
