By Professor Louis C H Fourie
OVER the years annual reports and other disclosures have been opportunities for companies to communicate their financial health, to promote the culture and brand of the company and to engage with a full spectrum of stakeholders.
The processing of this wealth of information significantly affects the perception and participation of stakeholders in the business. After all, what investors think of a company is critical to its success.
These mandatory and voluntary reports, often consisting of hundreds of pages, have become so extensive and complex that they can take hours to read and analyse.
Professional investors, therefore, increasingly started to depend on Artificial Intelligence (AI) and machine learning to read these reports, to extract information from them, and to make investment recommendations. Technology has progressed to such an extent that the audience for company reporting and disclosures is no longer only humans, but a growing number of intelligent machine readers, which use machine learning and natural language processing to process the information in companies’ reports.
Currently, a sizeable proportion of share trades is triggered by recommendations made by bots and algorithms.
A good example is Kensho (part of S&P Global since 2018), which developed an algorithm called Warren, named after the famous investor Warren Buffett.
The algorithm allows investors to ask complex questions in plain English and then provides answers by searching through millions of market data points, as well as the data in corporate reports.
Similarly, one of the leading hedge funds in the US, the Man Group, is managing a considerable part of its assets through the use of AI and algorithmic trading.
There is currently a major focus on the extraction of actionable meaning from the deluge of lengthy corporate reports and data. The output of the machine-read reports is currently driving many algorithmic traders, robot investment advisers, and a variety of quantitative analysts.
In October 2020, Sean Cao, Wei Jiang, Baozhong Yang and Alan Zhang published important research in this regard as a National Bureau of Economic Research (NBER) working paper under the title, How to talk when a machine is listening: Corporate disclosure in the age of AI.
The researchers analysed the mandatory reports that American public companies have to file with the Securities and Exchange Commission (SEC).
One of their major observations was the high number of machine-generated downloads of the corporate reports.
The mechanical downloads increased from 360 861 in 2003 to about 165 million in 2016, when 78 percent of all downloads seem to have been triggered by a request from a computer.
When drilling deeper, the NBER researchers further noticed that companies have carefully adjusted their language and reporting in order to achieve maximum impact with AI and algorithms that are reading the corporate reports and other disclosures.
The change in the readership of corporate reports to a predominantly machine and AI readership compelled companies to publish reports that are more friendly to machine analysis and processing.
The researchers further discovered a positive correlation between expected high machine downloads and the management of textual sentiment and audio emotion to optimally cater for machine and AI readers.
Machine readability, determined by the ease with which the algorithm analyses and processes the reported information, has become a key factor in writing company reports. For instance, a table in the usual annual report of a company might have a low readability score since the machine cannot recognise it as a table due to its particular formatting.
However, the same table could achieve a high readability score if the authors make effective use of tagging.
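As an illustration of why tagging matters (a minimal sketch of my own, not drawn from the research), a machine reader can recover a properly tagged HTML table with a few lines of standard parsing code; the same figures laid out with spaces in plain text give the parser no structure to latch onto. The table content below is hypothetical.

```python
from html.parser import HTMLParser

# Minimal sketch: extracting cells from a properly tagged HTML table.
class TableReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self.rows.append([])       # start a new row
        elif tag in ("td", "th"):
            self.in_cell = True        # subsequent text belongs to a cell

    def handle_endtag(self, tag):
        if tag in ("td", "th"):
            self.in_cell = False

    def handle_data(self, data):
        if self.in_cell:
            self.rows[-1].append(data.strip())

# Hypothetical, well-tagged disclosure table.
tagged = """
<table>
  <tr><th>Year</th><th>Revenue</th></tr>
  <tr><td>2015</td><td>1.2bn</td></tr>
  <tr><td>2016</td><td>1.4bn</td></tr>
</table>
"""

reader = TableReader()
reader.feed(tagged)
print(reader.rows)
# -> [['Year', 'Revenue'], ['2015', '1.2bn'], ['2016', '1.4bn']]
# A space-aligned plain-text version of the same numbers would reach
# the parser as one undifferentiated string.
```

The point of the sketch is only that explicit tags turn layout into structure a machine can read reliably.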
But companies are currently going much further than merely making provision for machine readability in their reporting.
They also attempt to adjust the sentiment and tone of their reports to subtly influence the algorithmic “readers” to come to favourable conclusions regarding the content and company and thus recommend the company for investment.
This is done by avoiding words that are perceived negatively in the criteria given to AI algorithms or are labelled as negative in the finance-specific dictionary. Companies also adjust their wording to avoid litigation-related or uncertain terminology, as well as language that signals too little or too much confidence.
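A crude sketch of the kind of screen companies are writing around might look like the following. The word list here is a tiny hypothetical stand-in for a finance-specific dictionary (such as the Loughran-McDonald word lists used in the research literature), not the actual lexicon, and the sample sentences are invented.

```python
import re

# Hypothetical stand-in for a finance-specific negative-word dictionary.
NEGATIVE_WORDS = {"litigation", "loss", "decline", "impairment", "adverse"}

def negativity_score(text: str) -> float:
    """Fraction of words in the text that appear in the negative-word list."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in NEGATIVE_WORDS)
    return hits / len(words)

machine_wary = "Results reflect continued growth across all segments."
machine_naive = "The decline in revenue and pending litigation caused a loss."

print(negativity_score(machine_wary))   # 0.0 - no flagged words
print(negativity_score(machine_naive))  # 0.3 - 3 flagged words out of 10
```

Real sentiment models are far more sophisticated, but even this toy version shows why swapping a flagged word for a neutral synonym can shift an algorithm's reading of a report.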
Serious investors do not only use AI algorithms to read and analyse annual reports; they also apply them to the regular reporting conference calls that company executives hold with analysts and investors.
Investing houses are increasingly using voice analysis software to identify vocal patterns and emotions in the commentary of reporting executives. In response to this new trend, companies are selecting their words very carefully and have started to adjust their tone of voice in conference calls with analysts to sound measurably more positive and excited.
This use of technology by investment houses and analysts is understandable in the highly competitive investment industry, and so is the reaction of the reporting companies, which want a positive rating and more investment.
However, by ceding more and more choices to algorithms and outsiders, we are inexorably opening ourselves to manipulation.
When corporate reports are predominantly being written for and read by machines that make important investment decisions that could have a major impact on companies and even the stock exchange, we need to do some critical thinking. If these intelligent machines are manipulated by carefully prepared reports, it could eventually have disastrous effects on the stock exchange and the market.
Technological momentum is a very powerful force, but it can pull us along senselessly in its slipstream. We, as humans, will have to consciously accept responsibility for the design and use of technologies in the corporate, financial and investment world. Otherwise, the road to the future may be rather slippery.
Professor Louis C H Fourie is a futurist and technology strategist