In 2016, two researchers from ZeroFOX conducted an experiment to see how many Twitter users they could get to click on malicious links. The kicker: one competed as a human hacker; the other was an artificial intelligence. The classic battle between man and machine.

The AI, named SNAP_R (Social Network Automated Phishing with Reconnaissance), was taught to study the behaviour of social network users and then design and implement its own phishing bait. It sent simulated spear-phishing tweets to more than 800 users at a rate of 6.75 tweets per minute, luring 275 victims.
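Those figures imply a striking hit rate. A quick back-of-the-envelope check (treating the reported "more than 800" as roughly 800, so the true rate would be slightly lower):

```python
# Back-of-the-envelope check of the reported SNAP_R figures.
# "More than 800" targets is not exact, so 800 is treated as a
# lower bound; the true click-through rate would be slightly lower.
targets = 800   # reported as "more than 800" users
victims = 275   # users who clicked the simulated phishing link
rate = 6.75     # tweets sent per minute

print(f"Click-through rate: ~{victims / targets:.0%}")        # ~34%
print(f"Minutes to cover all targets: ~{targets / rate:.0f}") # ~119
```

A click-through rate of roughly one in three, sustained across hundreds of targets in under two hours, is well beyond what a lone human phisher can manage.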

The artificial hacker performed substantially better than its human counterpart, composing and distributing more phishing tweets and achieving a markedly better click-through rate. At Black Hat USA 2017, 62% of hackers surveyed said that AI would be used for offensive purposes in the coming year. I think it’s safe to say that AI will be playing a prominent role in the future of hacking.

But what can we do to combat artificial intelligence? At this point it’s mainly speculation, but one thing is certain: as hackers continue to evolve their means of attacking systems, we will keep evolving the means to protect them.

Jacob McLaurin, Threat Intelligence Analyst at Foregenix, gave his opinion on what the future may hold for AI:

“In the event that attackers are able to develop a malicious AI before cybersecurity teams can build a protective one, it could cause a huge problem for the industry. It’s inevitable that one will be developed, but would we be ready for it when it arrives?

“Cybersecurity teams are usually on the back foot when defending people from whatever malicious software attackers can come up with. Bearing this in mind, I think it’s likely that it’ll be the bad guys who use AI to attack first. When they do, there won’t be much in place to defend against them. It’ll be a big problem.

“If an attacking AI progresses to the point that it can process things as a human being does, like knowing where it is within a network and making decisions based on that, it could cause havoc. Not to mention, what if it decides to operate independently of what the attacker designed it to do?

“In the end I think it’ll come down to AI vs AI and who can build the better machine. Whoever is able to build the strongest one is essentially going to win.”

This may sound like a dystopian vision of a distant future, but in reality it’s more science than science fiction. Konstantinos Karagiannis, chief technology officer of BT Americas, has been building a system that uses neural networks to simulate the way a human brain learns and solves problems. He’s previously been quoted as saying:

“Security scanners could identify intricate flaws using creative approaches you would have never thought of. It can be written with very modest hardware. A $1,000 GPU (Graphics Processing Unit, typically used in gaming) can outrun a supercomputer that used to fill a building 10 years ago.”
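To make the idea concrete, here is a deliberately toy-scale sketch of what sits at the core of a neural-network scanner: learning a decision boundary from labelled examples rather than from hand-written rules. Everything below (the features, the URLs, the labels, the network size) is invented for illustration and bears no relation to BT’s actual system:

```python
# Toy illustration only: a single-hidden-layer neural network that learns
# to flag "suspicious" URLs from a few hand-picked features. The features,
# thresholds, and training data are all invented for demonstration.
import numpy as np

def features(url):
    """Map a URL to a small numeric feature vector (heuristics are illustrative)."""
    return np.array([
        len(url) / 100.0,                      # very long URLs
        sum(c.isdigit() for c in url) / 10.0,  # digit-heavy addresses
        url.count("-") / 5.0,                  # hyphens common in lookalike domains
        1.0 if "@" in url else 0.0,            # '@' can hide the real destination
    ])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny hand-labelled training set (again, purely illustrative): 1 = suspicious.
urls = [
    ("https://example.com/login", 0),
    ("https://docs.example.org/guide", 0),
    ("http://secure-update-4821.example-payments.com/verify", 1),
    ("http://user@203.0.113.7/account-verify-now", 1),
]
X = np.array([features(u) for u, _ in urls])
y = np.array([[label] for _, label in urls], dtype=float)

# One hidden layer of 8 units, trained with plain gradient descent.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 0.5, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted probability of "suspicious"
    grad_out = p - y                    # cross-entropy gradient w.r.t. the logits
    W2 -= 0.1 * h.T @ grad_out / len(X)
    b2 -= 0.1 * grad_out.mean(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)   # backprop through tanh
    W1 -= 0.1 * X.T @ grad_h / len(X)
    b1 -= 0.1 * grad_h.mean(axis=0)

def score(url):
    h = np.tanh(features(url) @ W1 + b1)
    return float(sigmoid(h @ W2 + b2))

print(score("http://account-verify-9931.example-login.com/reset"))  # should score high
print(score("https://example.com/about"))                           # should score low
```

The point of the sketch is the shape of the approach, not the specifics: nobody wrote a rule saying “hyphenated domains with digits are bad”; the network inferred a boundary from examples. Scale the features, data, and model up by several orders of magnitude and you get the kind of scanner Karagiannis describes, one that can surface flaws its authors never explicitly encoded.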

It would seem that the cybersecurity industry needs to be proactive in building AI in order to defend against the perceived threat from the enemy. Is a Terminator/Skynet-esque future on our doorstep? Probably not, but AI hacking certainly is.