Wednesday, October 20

The path of science to turn a computer into a brain | Trends

The underlying questions are these: are a computer and a brain alike? Will we ever be able to replicate a human brain? Will we get a machine to think like a person? And if we succeed, will a consciousness, a mind, arise from it? A human brain is a piece of matter weighing a kilo and a half, yet it is the most complex and perhaps the most fascinating object in the universe; the only one, in fact, that tries to understand the universe. Will it be able to understand itself?


The first notion that leads us to compare a computer with a brain is that both are capable of storing and processing information. For example, both can do mathematical calculations, although the computer does them much faster than a human being. In general, computers are faster at performing operations that can be broken down into a series of simple steps (algorithms). But brains are far superior at more complex functions: creativity, the development of emotions and, in short, everything that makes us human.
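The idea of an algorithm as a series of simple repeated steps can be sketched in a few lines of Python; the example is mine, not the article's:

```python
# The article's point in miniature: a task broken into simple repeated
# steps (an algorithm) is exactly what a computer executes quickly.
def sum_up_to(n):
    """Sum the integers 1..n, one simple step at a time."""
    total = 0
    for i in range(1, n + 1):  # one elementary addition, repeated n times
        total += i
    return total

print(sum_up_to(1_000_000))  # a million additions, done in a fraction of a second
```

A person would take days to perform a million additions by hand; a processor runs this loop almost instantly, which is the asymmetry the paragraph describes.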

We can also find vague similarities in their design: where a computer works with transistor circuits through which electricity travels, inside our skull there is a very complex circuit of neurons (some 10¹¹ in each brain, as many as there are stars in the Milky Way, linked by some 10¹⁵ synapses) through which electrochemical signals circulate much more slowly, since in addition to electrical impulses they rely on chemical neurotransmitters.

Neural synapses are more complex than electronic logic gates. The set of neural connections is called the connectome: the Salk Institute has calculated that the brain's storage capacity, given its number of connections, is on the order of petabytes (a petabyte is a billion megabytes, the equivalent of about 6.7 million music albums in MP3 format). As for processing speed, a neuron works at about one kilohertz, a million times slower than the processor of a smartphone, which works on the order of gigahertz. This is why silicon processors do mathematical calculations faster. Transistors and neurons are therefore very different: the brain is not digital, it does not work with ones and zeros, but analog.
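The figures above are easy to check with a back-of-the-envelope calculation; the assumed values (a petabyte as a billion megabytes, an MP3 album of roughly 150 MB, a 1 kHz neuron against a 1 GHz processor) are approximations consistent with the text, not exact measurements:

```python
# Back-of-the-envelope check of the figures in the article.
PETABYTE_MB = 1_000_000_000        # megabytes in a petabyte

album_mb = 150                     # rough size of an album in MP3 format (assumption)
albums_per_petabyte = PETABYTE_MB / album_mb
print(f"{albums_per_petabyte / 1e6:.1f} million albums")   # -> 6.7 million albums

neuron_hz = 1_000                  # a neuron signals on the order of 1 kHz
cpu_hz = 1_000_000_000             # a smartphone core runs at ~1 GHz
print(f"CPU is {cpu_hz // neuron_hz:,}x faster")           # -> CPU is 1,000,000x faster
```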

“The brain is a machine capable of performing very complex operations in a highly efficient way: it only uses 50 watts of power, less than a bedside lamp,” explains Francisco Clascá, professor of human anatomy and embryology at the Autonomous University of Madrid (UAM), who studies the brain's axonal networks involved in functions such as attention, consciousness and intentional movement. “To perform functions similar to those performed by a brain, a supercomputer (for example, the MareNostrum in Barcelona) would require huge amounts of energy,” the professor adds. “It's amazing how much a brain does with very little.”


Some projects have tried to simulate a brain. This is the case of Blue Brain, launched by IBM and the École Polytechnique Fédérale de Lausanne, and of the Human Brain Project (HBP), a flagship project of the European Union whose third phase ends in 2023 and which brings together scientists from many disciplines in order to better understand the brain, learn from it and even simulate it. To represent a brain, or one of its parts, at large scale, it is necessary to have a map of the connectome (the set of connections), to know its dynamics (in mathematical form) and to have a very powerful computer.

Having a simulation of the brain, even a partial one, can help us understand the fundamentals of its functioning, understand certain diseases, or develop drugs. Computational neuroscience is the discipline that tries to simulate virtually the neural networks of our brain and their interactions, using computational and mathematical models. Neuromorphic computing (“silicon brains”) tries to reproduce neural connections not inside a computer but physically, with tangible circuits. It serves, on the one hand, to understand how the brain works and, on the other, to improve technology.
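A minimal sketch of the kind of model computational neuroscience works with is the classic leaky integrate-and-fire neuron, a standard textbook model; the parameter values below are illustrative assumptions, not taken from the article:

```python
# Simulate a single leaky integrate-and-fire neuron driven by a constant
# input current, counting how many times it fires.
def simulate_lif(current, dt=0.1, steps=1000,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the number of spikes fired during the simulation."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # The membrane potential leaks toward rest while integrating input.
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:      # threshold crossed: the neuron fires...
            spikes += 1
            v = v_reset        # ...and its potential resets
    return spikes

print(simulate_lif(current=1.8))  # a strong current produces frequent spikes
print(simulate_lif(current=1.1))  # a weaker one, fewer spikes
print(simulate_lif(current=0.5))  # below threshold, the neuron stays silent
```

Real projects like Blue Brain use enormously more detailed models of each neuron, which is precisely why they need supercomputers.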

“Any brain is a model from which to draw inspiration and learn from what evolution has done over millions of years, from the most capable, most resistant, most efficient solutions,” says Clascá. The brain has also been described by researchers such as the psychologist Gary Marcus as a kluge, a term sometimes jokingly expanded in English as “clumsy, lame, ugly, but good enough”: it has imperfections, it is full of patches and fixes, because it has not been designed but is the fruit of the accidents of evolution. Yet it works well, precisely because it is the result of natural selection, a process of improvement that has taken place over millions of years. A computer, on the other hand, is designed entirely by humans to perform its functions as efficiently as possible.


One of the most unique and amazing abilities of the brain is learning quickly. Machines may find multitasking easier, but they struggle to learn on their own. Artificial neural networks, a branch of artificial intelligence, try to emulate these capabilities, giving rise to the discipline of machine learning. “Neurons have several dendrites through which they capture information and then an axon through which they emit signals,” explains Javier De Felipe, neurobiologist at the Cajal Institute (CSIC) and director of Blue Brain in Spain. “Artificial neural networks try to emulate these systems virtually, in computers, with several layers of neurons.” (An important difference between brains and computers is that a transistor connects to two or three others, whereas a neuron in the cerebral cortex can be connected to hundreds or thousands of other neurons.) When there is a large number of layers, through which the processing of information becomes more complex, we speak of deep learning. Through these techniques, advances are being made in the recognition of voice, images and facial emotions, and in computer vision: functions that a human brain performs without problems are very complicated for a machine.
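The layered networks De Felipe describes can be sketched very compactly. In this toy example the weights are hand-picked (an assumption for illustration, not a trained network) so that two layers compute XOR, a function a single layer of artificial neurons cannot represent:

```python
# Each artificial "neuron" sums weighted inputs and applies an activation.
def step(x):
    return 1.0 if x > 0 else 0.0

def layer(inputs, weights, biases):
    # Every neuron in the layer connects to every input, loosely like
    # dendrites gathering incoming signals.
    return [step(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def xor_net(a, b):
    hidden = layer([a, b], [[1, 1], [-1, -1]], [-0.5, 1.5])  # OR and NAND units
    (out,) = layer(hidden, [[1, 1]], [-1.5])                 # AND of the two
    return int(out)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Deep learning stacks many such layers and, crucially, learns the weights from data instead of having them set by hand.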


Trying to reduce a brain entirely to a computer is what is known as reductionism. It can give a simple idea of how it works, but a very incomplete one. “The key difference is complexity: the human brain has about a million times more synaptic connections, and different levels of complexity,” explains Luis Pastor, professor at the Rey Juan Carlos University in Madrid. These levels of complexity range from molecules to neurons, networks and brain areas: it is a very complicated organ no matter where you look or at what level of detail you examine it. For a large-scale simulation of the brain, as Pastor explains, we would need computers with more computing and storage capacity than are currently available. “The analysis of the large amount of data that would be obtained would also be a problem,” he stresses.

The concept of wetware (something like “wet software”) tries to capture computationally what a brain is. It is neither software nor hardware, but a third thing: wetware. This “wetness” refers to brain plasticity, to the ever-changing neural connections, far from the rigidity of computers, and to the aforementioned capacity to adapt and learn. “Although a brain processes signals like a computer, it does not use silicon chips but neurons, which are connected in networks that interact dynamically,” says Clascá. The brain is always changing, and perhaps this neuroplasticity is its main distinguishing feature with respect to a computer. For example, when a brain is injured in one of its parts, it can learn to function differently. Over the years we lose a large number of neurons, but the system withstands the wear and tear.
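One classic way to model the plasticity described here is the Hebbian rule ("cells that fire together wire together"); the rule and the learning rate below are illustrative choices of mine, not something the article specifies:

```python
# Hebbian plasticity: a connection strengthens when the neurons on both
# sides of it are active at the same time.
def hebbian_update(weight, pre, post, rate=0.1):
    """Return the new synaptic weight after one co-activation step."""
    return weight + rate * pre * post

w = 0.2
for _ in range(10):                 # repeated co-activation...
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))                  # ...strengthens the synapse: 1.2

w2 = hebbian_update(0.2, pre=1.0, post=0.0)  # no co-activation, no change
print(w2)                           # 0.2
```

The contrast with a computer is that here the "wiring" itself changes with use, whereas the circuits on a chip are fixed at manufacture.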


In the field of philosophy stands the Computational Theory of Mind, formulated by thinkers such as Hilary Putnam and Jerry Fodor. This theory holds that the mind is physically grounded in brain activity and is functionally equivalent to a computer, that is, to a symbol-processing machine that follows rules sequentially.

For Fodor, the mind is modular, with different parts dedicated to music, mathematics or language. “These faculties,” wrote Fodor, “operate by means of abstract algorithms, just like computers.” These abstract computers are inspired by the universal machines of Alan Turing, the computing pioneer who anticipated what a computer would be: a machine that manipulates symbols following certain rules (algorithms), regardless of the physical mechanism in which it resides. Turing himself wondered whether machines could ever think.
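Turing's machine really is just a tape, a head, and a table of rules, and it fits in a few lines of code. This toy machine (my own example, not from the article) flips every bit on its tape and then stops:

```python
# A minimal Turing machine: rules map (state, symbol) to
# (new_symbol, head movement, new_state). The machine stops when the
# head runs off the tape.
def run_turing_machine(tape, rules, state="start"):
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        new_symbol, move, state = rules[(state, tape[head])]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape)

flip_rules = {
    ("start", "0"): ("1", "R", "start"),  # rewrite 0 as 1, move right
    ("start", "1"): ("0", "R", "start"),  # rewrite 1 as 0, move right
}

print(run_turing_machine("0110", flip_rules))  # -> 1001
```

The point of the abstraction is exactly the one in the text: nothing in the rule table cares whether it runs on silicon, vacuum tubes, or paper.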

In these philosophical fields, the questions pile up: if we simulated a brain, would a mind arise? Would it be conscious? This is the mind-brain problem, which consists of knowing whether the two entities are different things or the same, and how they are connected to each other. It has confronted philosophers for centuries. Is the mind an emergent property of the brain, like the collective mind of an anthill, an epiphenomenon of neural activity? The whole would then be more than the sum of its parts. If consciousness, the mind, the self, were a side effect of brain activity, something like an unexpected mistake, that would explain the sense of absurdity and meaninglessness that we experience while being alive. Could the consciousness of a machine emerge from such an epiphenomenon, as in some science fiction movies? These are all riddles.

What we know so far is this: a computer is completely understandable; it is, after all, the product of the human mind. A brain is the most complex object in the universe. “As of today, we are not sure that we can understand everything,” concludes De Felipe.
