A poker-winning machine is no threat


Published Feb 4, 2017


I didn't worry too much when computers beat humans at checkers, chess, or Go. It was, after all, only a matter of time before someone built a powerful computer with a vast database of known game situations. But now that machines are beating professionals at poker - a game of imperfect information - a question must be asked: Is artificial intelligence starting to threaten people in creative jobs?

In terms of game complexity - the number of allowed positions reachable in the course of a game - no-limit Texas Hold'em poker isn't an AI researcher's worst nightmare. The chess game tree has 10 to the 120th power nodes; the one for Go has 10 to the 170th. Two-player, no-limit Texas Hold'em is in between, with 10 to the 160th power possible decision points. There are methods of making the game-tree complexity manageable in a real-time game, often based on disregarding what led to a particular position and reducing the calculation depth for future positions, and they've been successfully implemented.

But in poker, the imperfect information creates a whole extra layer of complexity. As Matej Moravcik and his team of Canadian and Czech researchers wrote in a January 2017 paper describing DeepStack, a piece of software they developed that bests professional poker players:

"The correct decision at a particular moment depends upon the probability distribution over private information that the opponent holds, which is revealed through their past actions. However, how the opponent's actions reveal that information depends upon their knowledge of our private information and how our actions reveal it. This kind of recursive reasoning is why one cannot easily reason about game situations in isolation, which is at the heart of local search methods for perfect information games."

In other words, it's hard to reduce poker to a workable abstraction without compromising on the level of play. Two competing groups, however, appear to have overcome that problem lately: Moravcik's and another one from Carnegie Mellon University, which hasn't yet published a description of its winning program, though its members have provided pointers to what they did in their previous work.


The language in which DeepStack's creators describe their software is disturbing to anyone worried about being edged out by machines. Moravcik and his team wrote that DeepStack had "intuition" - an ability to replace computation with a "fast approximate estimate." The machine developed it through "training" on lots of random poker situations. It worked consistently well enough to beat 33 pro players from 17 countries.

Libratus, the Carnegie Mellon team's product, is apparently based on different principles, using more precise calculations in the final part of a poker hand than in the early stages. It has beaten four top poker players, who came away in awe: the software managed to remain unpredictable and keep winning. Among other techniques, it varied the size of its bets to maximize profit in a way even the best human players find too taxing to imitate.

The good news for humans, however, is that even with all the complexity-reducing shortcuts the researchers have developed, beating a good poker player requires a huge amount of computing power. Deep Blue, the IBM machine that beat Garry Kasparov at chess, was a 32-node high-performance computer. Libratus used 600 nodes of a supercomputer, the equivalent of 3,330 high-end MacBooks. It would take far more, and probably more ingenious shortcuts, to create an artificial intelligence that can win real-life, multi-player poker games at a high level.

It never pays in AI research to say that something is impossible. The field is developing fast and claiming successes that seemed unattainable a decade, even five years, ago. But one can see how introducing even a little uncertainty and information asymmetry immediately makes AI developers' work far harder and more resource-intensive. Poker, though it's extremely difficult to play well, is, after all, a game with well-defined rules. How much artificial brainpower, and what unfathomable shortcuts, will be required to excel in a game with few or no rules - like a business negotiation or, at the extreme, a process like the Syria peace talks? Humans are used to situations in which rules develop in real time. No existing machine - and, judging by the state of the art, none that will be developed in the near future - can come close to our confidence in dealing with uncertainty and imperfect information.

Machines play an important role in eliminating routine jobs. What we're seeing with recent AI developments is the expansion of how we define "routine" to most processes with clear rules. Even the more complex of such processes - like multi-player poker - may turn out to be economically inefficient to automate. But processes without defined rules appear to be beyond the realm of the practical. To be safe from machines, we humans need to seek out such situations and learn to excel in them.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Leonid Bershidsky is a Bloomberg View columnist. He was the founding editor of the Russian business daily Vedomosti and founded the opinion website Slon.ru.

BLOOMBERG
