Robots threaten US jobs

A self-navigating TUG robot made by Aethon performing transport duty in a hospital.

Who needs an army of lawyers when you have a computer? When Minneapolis attorney William Greene faced the task of combing through 1.3 million electronic documents in a recent case, he turned to a so-called smart computer programme. Three associates selected relevant documents from a smaller sample, “teaching” their reasoning to the computer. The software’s algorithms then sorted the remaining material by importance.

“We were able to get the information we needed after reviewing only 2.3 percent of the documents,” Greene, a Minneapolis-based partner at law firm Stinson Leonard Street, said.

Artificial intelligence has arrived in the American workplace, spawning tools that replicate human judgments that were too complicated and subtle to distill into instructions for a computer. Algorithms that “learn” from past examples relieve engineers of the need to write out every command.

The advances, coupled with mobile robots wired with this intelligence, make it likely that occupations employing almost half of today’s US workers – ranging from loan officers to cab drivers and real estate agents – will become possible to automate in the next decade or two, according to a study done at the University of Oxford in the UK.

“These transitions have happened before,” said Carl Benedikt Frey, the co-author of the study and a research fellow at the Oxford Martin Programme on the Impacts of Future Technology. “What’s different this time is that technological change is happening even faster, and it may affect a greater variety of jobs.”

It’s a transition on the heels of an information-technology revolution that’s already left a profound imprint on employment across the globe. For both physical and mental labour, computers and robots replaced tasks that could be specified in step-by-step instructions – jobs that involved routine responsibilities that were fully understood.

That eliminated work for typists, travel agents and a whole array of middle-class earners over a single generation.

Yet even increasingly powerful computers faced a mammoth obstacle: they could execute only what they were explicitly told. It was a nightmare for engineers trying to anticipate every command necessary to get software to operate vehicles or accurately recognise speech. That kept many jobs in the exclusive province of human labour – until recently.

Oxford’s Frey is convinced of the broader reach of technology now because of advances in machine learning, a branch of artificial intelligence that has software “learn” how to make decisions by detecting patterns in decisions humans have made.

The approach has powered leapfrog improvements in making self-driving cars and voice search a reality in the past few years. To estimate the impact that will have on 702 US occupations, Frey and colleague Michael Osborne applied some of their own machine learning.

They first looked at detailed descriptions for 70 of those jobs and classified them as either possible or impossible to computerise. Frey and Osborne then fed that data to an algorithm that analysed what kind of jobs may lend themselves to automation and predicted probabilities for the remaining 632 professions.
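The procedure the article describes – hand-label a small seed set, train a classifier, then predict probabilities for the rest – can be sketched in a few lines. This is only an illustration, not Frey and Osborne’s actual model: the feature scores, occupations and labels below are invented, and a toy logistic regression stands in for their Gaussian-process classifier.

```python
import math

# Hypothetical feature scores in [0, 1] per occupation, loosely modelled on
# the study's "bottleneck" traits: (manual dexterity, creativity, social
# intelligence). Higher scores should mean harder to automate.
labelled = [
    # (features, label): 1 = possible to computerise, 0 = not (hand-labelled)
    ((0.2, 0.1, 0.2), 1),   # e.g. routine clerical work
    ((0.3, 0.2, 0.1), 1),
    ((0.8, 0.9, 0.7), 0),   # e.g. senior litigator
    ((0.7, 0.8, 0.9), 0),
]
unlabelled = {"loan officer": (0.2, 0.2, 0.3),
              "surgeon": (0.9, 0.6, 0.8)}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a tiny logistic regression on the seed set by gradient descent.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(5000):
    for x, y in labelled:
        err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
        b -= 0.1 * err

def automation_probability(x):
    """Predicted probability that an occupation can be computerised."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

for job, x in unlabelled.items():
    print(job, round(automation_probability(x), 2))
```

The same idea scales from this toy seed set of four to the study’s 70 hand-classified occupations and 632 predicted ones.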

The higher that percentage, the sooner computers and robots will be capable of stepping in for human workers. Occupations that employed about 47 percent of Americans in 2010 scored high enough to rank in the risky category, meaning they could be possible to automate “perhaps over the next decade or two”, their analysis, released in September, showed.

“My initial reaction was, wow, can this really be accurate?” said Frey, who’s a PhD economist. “Some of these occupations that used to be safe havens for human labour are disappearing one by one.”

Loan officers are among the most susceptible professions, at a 98 percent probability, according to Frey’s estimates. Inroads are already being made by Daric, an online peer-to-peer lender partially funded by former Wells Fargo chairman Richard Kovacevich. Begun in November, it doesn’t employ a single loan officer. It probably never will.

The start-up’s weapon: an algorithm that not only learned what kind of person made for a safe borrower in the past, but is also constantly updating its understanding of who is creditworthy as more customers repay or default on their debt.

It’s this computerised “experience”, not a loan officer or a committee, that calls the shots, dictating which small businesses and individuals get financing and at what interest rate. It doesn’t need teams of analysts devising hypotheses and running calculations because the software does that on massive streams of data on its own.

The result: An interest rate that’s typically 8.8 percentage points lower than from a credit card, according to Daric.
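A continuously updating model of the kind the start-up describes can be sketched as an online classifier that takes one learning step per observed repayment or default. Everything below is an assumption for illustration – the feature names, learning rule and pricing formula are invented, not Daric’s actual system.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class OnlineCreditModel:
    """Toy online logistic model: it re-weights borrower features each
    time a loan is repaid (y=1) or defaults (y=0)."""

    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def repay_probability(self, x):
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)

    def observe(self, x, repaid):
        # One stochastic-gradient step per outcome, so the model's notion
        # of creditworthiness keeps drifting as new data arrives.
        err = self.repay_probability(x) - (1 if repaid else 0)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

    def offered_rate(self, x, base=0.05, spread=0.25):
        # Illustrative pricing: riskier borrowers are quoted higher rates.
        return base + spread * (1 - self.repay_probability(x))

model = OnlineCreditModel(n_features=2)
# Hypothetical features: (income score, credit-history score).
for _ in range(2000):
    model.observe((0.9, 0.8), repaid=True)
    model.observe((0.2, 0.1), repaid=False)
print(round(model.offered_rate((0.85, 0.75)), 3))
```

The point of the design is in `observe`: no analyst re-runs the numbers; every repayment or default is itself a training example.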

“The algorithm is the loan officer,” said Greg Ryan, the 29-year-old chief executive of the California company that consists of him and five programmers. “We don’t have overhead, and that means we can pass the savings on to our customers.”

Similar technology is transforming what is often the most expensive part of litigation, during which attorneys pore over e-mails, spreadsheets, social media posts and other records to build their arguments.

Each lawsuit was too nuanced for a standard set of sorting rules, and the string of keywords lawyers suggested before every case still missed too many smoking guns. The reading got so costly that many law firms farmed out the initial sorting to lower-paid contractors.

The key to automating some of this was the old adage of showing, not telling – having trained attorneys illustrate to the software the kind of documents that make for gold.

Programmes developed by companies such as San Francisco-based Recommind then apply large-scale statistical analysis to predict which files expensive lawyers shouldn’t waste their time reading. It took Greene’s team of lawyers 600 hours to get through the 1.3 million documents with the help of Recommind’s software. That task, assuming a speed of 100 documents per hour, would have taken about 13 000 hours if humans had to read all of them.
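The underlying mechanism – attorneys tag a sample, software scores and ranks the rest – can be sketched with a tiny relevance classifier. This is an assumption-laden illustration, not Recommind’s product: the documents are invented and a naive-Bayes word scorer stands in for commercial predictive-coding software.

```python
import math
from collections import Counter

# Hypothetical seed set: attorneys tag a small sample as relevant (1) or not (0).
seed = [
    ("merger price discussed in private email", 1),
    ("confidential valuation memo attached", 1),
    ("lunch order for the team meeting", 0),
    ("office printer out of toner again", 0),
]
unreviewed = [
    "draft valuation shared by private email",
    "new toner delivered to the office",
]

# Count word occurrences per class to learn what "relevant" looks like.
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in seed:
    class_counts[label] += 1
    word_counts[label].update(text.split())

def relevance_score(text):
    """Naive-Bayes log-odds of 'relevant' vs 'not', with add-one smoothing."""
    totals = {c: sum(word_counts[c].values()) for c in (0, 1)}
    vocab = len(set(word_counts[0]) | set(word_counts[1]))
    score = math.log(class_counts[1] / class_counts[0])
    for w in text.split():
        p1 = (word_counts[1][w] + 1) / (totals[1] + vocab)
        p0 = (word_counts[0][w] + 1) / (totals[0] + vocab)
        score += math.log(p1 / p0)
    return score

# Rank the unreviewed pile so lawyers read the likeliest hits first.
ranked = sorted(unreviewed, key=relevance_score, reverse=True)
print(ranked[0])
```

Sorting by predicted relevance is what let Greene’s team stop after reviewing a small fraction of the pile: the probable smoking guns float to the top.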

“It doesn’t mean you need zero people, but it’s fewer people than you used to need,” Daniel Martin Katz, a professor at Michigan State University’s College of Law in East Lansing who teaches legal analytics, said. “It’s definitely a transformation for getting people that first job while they’re trying to gain additional skills as lawyers.”

Smart software is transforming the world of manual labour as well, propelling improvements in autonomous cars that make it likely machines can replace taxi drivers and heavy truck drivers in the next two decades, according to Frey’s study.

One application already here: Aethon’s self-navigating TUG robots, which transport soiled linens, drugs and meals in more than 140 hospitals, predominantly in the US. When Pittsburgh-based Aethon first installs its robots in new facilities, humans walk the machines around.

It would have been impossible to have engineers pre-programme all the necessary steps, according to chief executive Aldo Zini.

“Every building we encounter is different,” Zini said. “It’s an infinite number” of potential contingencies and “you could never ahead of time try to programme everything in. That would be a massive effort. We had to be able to adapt and learn as we go.”
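The installation routine Zini describes is a form of “teach then repeat”: record a route while a human leads the robot, then replay it autonomously. The sketch below is a bare-bones illustration of that idea only – the class, route names and waypoints are invented, and Aethon’s real system involves mapping and obstacle avoidance far beyond this.

```python
class TugRobot:
    """Toy teach-and-repeat navigator (hypothetical, not Aethon's software)."""

    def __init__(self):
        self.routes = {}

    def teach(self, name, waypoints):
        # During installation, a human walks the robot through the building
        # and it records the poses it passes through.
        self.routes[name] = list(waypoints)

    def drive(self, name):
        # Later, the robot replays the learned route on its own.
        return [f"move to {x},{y}" for x, y in self.routes[name]]

robot = TugRobot()
robot.teach("pharmacy-to-ward3", [(0, 0), (4, 0), (4, 7)])
print(robot.drive("pharmacy-to-ward3"))
```

Because each building’s route is learned on site rather than hard-coded, no engineer has to anticipate the “infinite number” of layouts in advance.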

To be sure, employers wouldn’t necessarily replace their staff with computers just because it became technically feasible to do so, Frey said. It could remain cheaper for some time to employ low-wage workers than to invest in expensive robots. Consumers may prefer interacting with people to using self-service kiosks, while government regulators could choose to require human supervision of high-stakes decisions.

Even more, recent advances still don’t mean computers are nearing human-level cognition that would enable them to replicate most jobs. That’s at least “many decades” away, according to Andrew Ng, the director of the Stanford Artificial Intelligence Laboratory near Palo Alto, California.

Machine-learning programmes are best at specific routines with lots of data to train on, where answers can be gleaned from the past. Try getting a computer to do something unlike anything it has seen before, and it just can’t improvise. Nor can machines come up with novel and creative solutions, or learn from a couple of examples the way people can, Ng said.

“This stuff works best on fairly structured problems,” said Frank Levy, a professor emeritus at the Massachusetts Institute of Technology in Cambridge, who has extensively researched technology’s impact on employment. “Where there’s more flexibility needed and you don’t have all the information in advance, it’s a problem.”

That means the positions of Greene and other senior attorneys, whose responsibilities range from synthesising persuasive narratives to earning the trust of their clients, won’t disappear for some time.

Less certain are prospects for those specialising in lower-paid legal work like document reading, or in jobs that involve other relatively repetitive tasks.

As more of the world gets digitised and the cost to store and process that information continues to decline, artificial intelligence will become even more pervasive in everyday life, Stanford’s Ng says.

“There will always be work for people who can synthesise information, think critically, and be flexible in how they act in different situations,” said Ng, who is also a co-founder of online education provider Coursera. Still, he said: “the jobs of yesterday won’t be the same as the jobs of tomorrow.”

Workers will likely need to find vocations involving more cognitively complex tasks that machines can’t touch. Those positions also typically require more schooling, Frey said. “It’s a race between technology and education.” – Bloomberg


