Tech News: Artificial Intelligence and the human brain
Humans have always dreamt of soaring through the sky and since prehistoric times have attempted to fly by mimicking birds.
They attached wings made of feathers or lightweight wood to their arms, but the results were often catastrophic, since human arm muscles are nothing like those of a bird. Lacking an understanding of the physics involved, they found that mimicking the flight mechanics of birds did not provide a solution.
It was only when Sir George Cayley in the late 1700s, and later the German engineer Otto Lilienthal in 1891 and the American Wright brothers in 1903, studied aerodynamics that humans succeeded in flying.
Artificial Intelligence (AI) mimicking the human brain
Very similarly, the human brain and neuroscience have for many decades been the main inspiration for AI researchers. The fields of neuroscience and AI have a long and entangled history, and much AI research, including many of its algorithms, is based on the cognition mechanisms of the human brain. Some AI endeavours have achieved encouraging results, such as the work of DeepMind, the Alphabet (Google) subsidiary.
This notion of mimicking the human brain builds on a long tradition, dating back to the research of the Spanish anatomist and Nobel laureate Santiago Ramón y Cajal, who in the 19th century microscopically studied and sketched thousands of neurons with their treelike dendrites and axons.
In 1943, the psychologist Warren McCulloch and his mentee Walter Pitts, a homeless teenage mathematics genius, proposed an interesting framework for how the brain might encode complex thoughts. According to them, each neuron logically combined multiple inputs into a single binary output, namely true or false. Together these logical operations could be combined into the words, sentences, and paragraphs of cognition.
Although it later became apparent that the McCulloch and Pitts model does not describe the brain very well, it played a crucial role in the binary architecture of the first modern computers and eventually evolved into the artificial neural networks now commonly used in deep learning.
In 2009, an over-confident Israeli neuroscientist and founding director of the Blue Brain Project, Henry Markram, made an ostentatious proposal that within ten years he would build a complete simulation of the human brain inside a supercomputer. He had spent years mapping the cells in the neocortex, the presumed centre of perception and thought, and declared that he would soon create a virtual brain in silicon, from which AI would organically emerge.
The elusive complexity of cognition
Although mimicking or recreating human cognitive functions based on the neural architecture of the brain may seem theoretically reasonable, it has proven to be an extremely challenging task, partly because neuroscientists are struggling to fully understand the cognitive mechanisms of the human brain.
It is not difficult to understand that artificial neural networks have been inspired by the neural architecture of the human brain, but when we go beyond the obvious to the relationship between currently popular deep learning models and neuroscience, it becomes incredibly complex.
Most of the current research in neural networks has been limited to emulating the synaptic connections between neurons in the cortex of the brain.
However, the quest for reconstructing cognitive capabilities of the brain in deep neural networks remains one of the elusive goals of AI.
Even if scientists succeed in re-creating intelligence by carefully simulating every molecule in the brain, they still would not have found the underlying principles of cognition. Scientists need to understand the brain at the systems-neuroscience level, namely the algorithms, architectures, functions, and representations it utilises.
Alternative approaches to AI
Despite much research, the formation of knowledge in the human brain is still a murky area. In addition to the connection between neurons, many different cognitive skills complement the capturing and development of knowledge.
In more recent AI research, a new generation of AI techniques has thus started to recreate some of these cognitive functions of the human brain.
The new neuroscience-inspired approach to AI science differs fundamentally from neuromorphic computing systems closely mimicking or reverse engineering human neural circuits. By focusing on the computational and algorithmic levels, the new neuroscience-inspired approach gains transferable insights into the general mechanisms of brain function.
Some recent developments in AI that are guided by neuroscientific considerations are:
Attention: Attentional mechanisms that let humans focus on a specific task have become a recent source of inspiration for deep learning models such as convolutional neural networks (CNNs) and deep generative models. Attention enables AI models to ignore irrelevant information, for example when classifying objects in a picture or when translating between languages.
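The idea can be sketched in a few lines: a query is compared with a set of keys, and a softmax turns the similarity scores into weights, so the model attends mostly to the relevant values. This is a minimal illustration of scaled dot-product attention, not any particular production model; all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: subtract the max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    # similarity of the query to every key, scaled for numerical stability
    scores = keys @ query / np.sqrt(keys.shape[-1])
    weights = softmax(scores)            # most weight on the most relevant input
    return weights @ values, weights     # weighted blend of the values
```

The weights always sum to one, so the output is a convex mixture of the values, dominated by the entries whose keys resemble the query.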
Continual learning: Unlike human beings, who retain previous knowledge when learning new tasks, AI neural networks suffer from “catastrophic forgetting” when they overwrite previous configurations during successive optimisation tasks. A recent deep learning technique inspired by human continual learning is known as Elastic Weight Consolidation (EWC). It slows down learning in a subset of network weights by anchoring them to previous solutions. The EWC algorithm enables deep Reinforcement Learning (RL) networks to learn continuously on a large scale.
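The anchoring at the heart of EWC is a quadratic penalty added to the new task's loss. The sketch below assumes a per-weight importance estimate (in EWC this is derived from the Fisher information of the old task); the function name and the regularisation strength are illustrative.

```python
import numpy as np

def ewc_loss(task_loss, weights, old_weights, fisher, lam=1000.0):
    # Quadratic penalty that anchors weights deemed important for the old
    # task (large Fisher value) close to their previous solution, thereby
    # slowing learning in that subset of weights.
    penalty = 0.5 * lam * np.sum(fisher * (weights - old_weights) ** 2)
    return task_loss + penalty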
Episodic memory: Episodic memory is the rapid encoding of autobiographical events, such as places or experiences, in memory (one-shot learning) and is mostly associated with neural circuits in the medial temporal lobe, in particular the hippocampus. This has inspired AI scientists to integrate episodic memory into Reinforcement Learning (RL) algorithms, for example by selecting actions based on the similarity between the current situational input and experiences (e.g. actions and reward outcomes) previously stored in memory.
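A minimal sketch of this idea: store past states together with the returns they led to, and value a new state by the returns of its most similar stored experiences. The class and method names are illustrative, not taken from any particular system.

```python
import numpy as np

class EpisodicMemory:
    """Stores (state, return) pairs and values new states by similarity."""

    def __init__(self):
        self.states, self.returns = [], []

    def store(self, state, ret):
        self.states.append(np.asarray(state, dtype=float))
        self.returns.append(float(ret))

    def estimate(self, state, k=1):
        # average the returns of the k most similar stored experiences
        q = np.asarray(state, dtype=float)
        dists = [np.linalg.norm(q - s) for s in self.states]
        nearest = np.argsort(dists)[:k]
        return float(np.mean([self.returns[i] for i in nearest]))
```

Because the value of a situation is read out directly from a single stored experience, the agent can exploit something it has seen only once, without waiting for slow gradient-based learning.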
Inference: Humans are known for their ability to learn new concepts efficiently through inductive inference from prior knowledge. Until now, deep learning systems have relied on very large amounts of training data to master specific tasks. However, recent research into structured probabilistic methods and deep generative models has incorporated brain-inspired inference mechanisms into AI programming. AI can now make inferences about a new concept even with limited data and can generate new samples from a single example of a concept. Also building on the inference abilities of the human brain is the rapidly advancing field of meta-learning.
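One-shot concept learning can be caricatured as classification by similarity to a single stored example per class. Real systems, such as structured probabilistic models, are far richer, so the sketch below is only a metric-based stand-in with illustrative names.

```python
import numpy as np

def one_shot_classify(examples, query):
    # `examples` maps each label to a single example vector; the query is
    # assigned the label of the most similar example. This is a crude
    # stand-in for richer one-shot inference methods.
    labels = list(examples)
    dists = [np.linalg.norm(np.asarray(query, float) - np.asarray(examples[l], float))
             for l in labels]
    return labels[int(np.argmin(dists))]
```

The point of the sketch is the contrast with conventional deep learning: a new concept is usable after a single example, because prior structure (here, the distance metric) does the heavy lifting instead of massive training data.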
Imagination and planning: Human consciousness entails the ability to think about and predict the future. In contrast, most deep learning systems operate reactively and lack the ability to plan for longer-term outcomes. New AI research has thus introduced architectures that, in parallel to the function of the hippocampus, generate temporally consistent sequences simulating the geometric layout of newly experienced realistic environments, combining multiple components to produce an imagined experience that is spatially and temporally coherent.
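The planning side of this idea can be sketched as imagining short rollouts with a transition model and choosing the first action of the best imagined trajectory, rather than reacting greedily. The sketch below assumes a hand-given `model` and `reward` function and is purely illustrative.

```python
from itertools import product

def plan_by_imagination(state, actions, model, reward, horizon=2):
    # Imagine every action sequence up to `horizon` steps ahead using a
    # transition model, score each imagined trajectory by its total
    # reward, and return the first action of the best one.
    best_first, best_ret = None, float("-inf")
    for seq in product(actions, repeat=horizon):
        s, ret = state, 0.0
        for a in seq:
            s = model(s, a)      # imagined next state, no real interaction
            ret += reward(s)
        if ret > best_ret:
            best_first, best_ret = seq[0], ret
    return best_first
```

Exhaustive rollouts grow exponentially with the horizon, which is why practical systems prune or learn which futures are worth imagining.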
Working memory: Human intelligence has the ability to maintain and manipulate information in working memory (an active store), mostly associated with the prefrontal cortex and interconnected areas. AI research has built on these models by creating architectures that explicitly maintain information over time, such as long short-term memory (LSTM) networks and differentiable neural computers (DNCs). A DNC allows its network controller to perform a wide range of complex memory and reasoning tasks, such as determining the shortest route through a graph-like structure or map.
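The gating at the heart of an LSTM cell can be sketched in plain NumPy: separate gates decide what to forget from the cell state, what to write into it, and what to expose as output. The weight layout below is one common convention, shown only for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    # One step of a long short-term memory cell. The stacked weight
    # matrix W (shape 4n x (m+n)) produces the forget, input, and output
    # gates plus a candidate update from the input x and previous state h.
    z = W @ np.concatenate([x, h]) + b
    n = h.size
    f, i, o, g = z[:n], z[n:2*n], z[2*n:3*n], z[3*n:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # selectively keep and write
    h_new = sigmoid(o) * np.tanh(c_new)               # selectively expose
    return h_new, c_new
```

The cell state `c` is the "active store": because it is updated multiplicatively by gates rather than overwritten, information can persist across many steps until the network chooses to forget it.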
The future of AI
From the above, it seems that the convergence of AI and neuroscience research will lead to some very interesting AI developments in the future. Modern neural networks will go way beyond the mere connections between neurons and will start reconstructing the core building blocks of human intelligence.
AI systems now match human performance in demanding object-recognition tasks and even outperform experts in dynamic, adversarial environments such as video, board, and imperfect-information games. Machines can also autonomously create remarkably accurate synthetic natural images and simulations of human speech, translate multiple languages, and produce art in the style of well-known painters.
It is even possible that future AI models will not mimic the brain at all. After all, airplanes fly although they bear little resemblance to birds; the solution to the human desire to fly was not to have wings like a bird. As we gradually decipher the details of how intelligence operates in the human brain, we will hopefully realise that we are currently only describing the emperor’s clothes in the absence of the emperor. But we will know the emperor when we see him, whatever clothing he may be wearing.
The brain has always fascinated us as human beings and will continue to fascinate us for quite some time, until we can successfully recreate it. However, the creation of human-level general AI (or “Turing-powerful” intelligent systems), as well as the nature of creativity, dreams, and consciousness, currently remain elusive mysteries.
Professor Louis C H Fourie is a futurist and technology strategist.