Neuromorphic Computing — A Change in the Paradigm

Joe Addamo
11 min read · Jun 1, 2020

There is a paradigm change on the horizon for computing. Substantial research in this field has led to the realization that there is a way to store data in the same location where it is processed, much like how the human brain works. This has sparked a shift from traditional computing to what’s known as Neuromorphic Computing. The driving force of Neuromorphic Computing is to figure out how the human brain works and how to emulate that process (Fulton III). “We can’t exactly grow brains in jars yet. But if we have a plausible theory of what constitutes cognition, we can synthesize a system that abides by the rules of that theory, perhaps producing better results using less energy and requiring an order of magnitude less memory” (Fulton III).

In a time when humans are effectively in a golden age of technological advancement, you may ask: why do we need to change the way our machines compute? The reason is that traditional computing is beginning to reach its limit. As it stands, the eruption of technology and of data use and manipulation lends credence to Moore’s Law (Vangie Beal, Moore’s Law). Gordon Moore, the co-founder of Intel, observed that the number of transistors in dense circuits doubles roughly every two years, and predicted that this would continue until around 2020–2025; this observation is what’s known as Moore’s Law (Vangie Beal, Moore’s Law). Scientists commonly agree that a crisis will follow, in which the steady growth in computational power and decrease in cost will no longer hold true. The end of this favorable power-to-cost ratio has been dubbed Moore’s Crisis (Valle, 1).
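
To make the doubling trend concrete, here is a minimal sketch of the arithmetic behind Moore’s Law. The 1971 starting point of roughly 2,300 transistors (the original Intel 4004) is an illustrative assumption on my part, not a figure taken from the sources cited here.

```python
# Illustrative sketch of Moore's Law: transistor counts doubling every two years.
# The 1971 starting point (2,300 transistors, roughly the Intel 4004) is an
# assumption made for illustration, not a figure from the cited sources.

def projected_transistors(start_count: int, start_year: int, year: int) -> int:
    """Project a transistor count, assuming a doubling every two years."""
    doublings = (year - start_year) / 2
    return int(start_count * 2 ** doublings)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(2300, 1971, year):,}")
```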

Remember the previous statement about how the driving force of Neuromorphic Computing is to emulate the processing of the human brain? The reason is that our brains are far more adept at distinguishing patterns than even the most powerful supercomputers (Valle, 2). For example, “While the brain of a chess player uses of the order of 20 watts, a supercomputer requires a million times more to face the same challenge” (Valle, 2). That example makes it easy to see why it’s crucial to take a new approach to computing. If we can compute in a neuromorphic way (that is, by emulating the processing of the brain), we will undoubtedly be able to create machines whose ratio of computing power to energy consumption is equal to, if not far greater than, that of our own human cortex (Valle, 2).

To fully understand why traditional computing falls short where neuromorphic computing excels, we need to understand how traditional computing works. Traditional computational machines are based on an architecture known as the Turing–von Neumann (TvN) paradigm (Valle, 1). This structure involves several components: a memory unit that stores input/output data, a processing unit (CPU) into which that data is transferred and which runs instructions and mathematical operations on it, and an input/output device that provides the interface between the operator and the external world (Valle, 1). The data itself is represented in machine code, a series of bits; this representation of data using only the digits 0 and 1 is known as binary.
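
As a small illustration (plain Python, nothing specific to the sources above), here is what that binary representation looks like for a number and for a character:

```python
# Everything a conventional machine stores -- numbers, text, instructions --
# ultimately reduces to patterns of 0s and 1s.
value = 42
letter = "A"

print(bin(value))                   # 0b101010: the integer 42 as bits
print(format(ord(letter), "08b"))   # 01000001: the character 'A' as an 8-bit pattern
```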

Because computers think and operate in binary, there is a limit to the range of questions we can ask of them, making them extremely rigid. There is also an inescapable energy tax applied whenever data is transferred between the memory unit where it is stored and the CPU where it is manipulated. This is exactly what Moore’s crisis refers to, and what neuromorphic computing aims to solve. The vast majority of electronic computers today are stored-program machines; that is, they keep data and program instructions in a common storage space. As long as data is stored separately from where it’s processed, Moore’s crisis is inevitable (Valle, 1).
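
A back-of-envelope sketch of that energy tax might look like the following. The per-operation cost figures are purely hypothetical placeholders I chose for illustration, picked only to show the shape of the problem: moving data can dominate the cost of computing on it.

```python
# A back-of-envelope sketch of the von Neumann "energy tax".
# The per-operation energy figures are hypothetical placeholders, chosen only
# to show that moving data can dominate the cost of the arithmetic itself.

ENERGY_PER_FETCH = 100.0   # hypothetical units to move one word memory -> CPU
ENERGY_PER_ADD = 1.0       # hypothetical units for the addition in the CPU

def cost_of_summing(n_values: int) -> float:
    """Total energy to fetch n values from memory and add them in the CPU."""
    fetch_cost = n_values * ENERGY_PER_FETCH
    compute_cost = (n_values - 1) * ENERGY_PER_ADD
    return fetch_cost + compute_cost

print(cost_of_summing(1_000_000))  # dominated by the fetches, not the additions
```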

Neuromorphic computing seeks to bypass the separation between where data is stored and where it is manipulated by storing the data in the same location as it is processed. This is done through neuron-based models. In a neuromorphic chip, neurons talk to each other in ways that mimic the biological brain, using precise electrical currents that flow across the synapse between two neurons. This allows a whole gradient of understanding to be transmitted simultaneously, as opposed to the traditional options of just yes and no (or their binary representation, 0 and 1). Scott Fulton III explains this process well in his article: “A neural network in computing is typically represented by a set of elements in memory — dubbed axons, after their counterparts in neurology — that become adjusted, or weighted, in response to a series of inputs. These weights are said to leave an impression, and it is this impression that (hopefully) a neural net can recall when asked to reveal the common elements among the inputs. If this impression can be treated as “learning,” then a small neural net may be trained to recognize letters of the alphabet after an extensive training”. This also gives way to machine learning and deep learning, in which neurons are able to recognize patterns and make predictions about how to carry out specific tasks without being explicitly instructed (Fulton III).
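
To make the idea of weighted elements that “leave an impression” concrete, here is a minimal sketch of an ordinary software perceptron. It is not the neuron model of any particular neuromorphic chip, just an illustration of weights being adjusted in response to inputs until a pattern is learned rather than explicitly programmed; the task (learning logical AND) is my own toy example.

```python
# Minimal perceptron sketch: weights adjust in response to inputs, and the
# resulting "impression" lets the unit recognize a pattern it was never
# explicitly programmed to detect. A plain software illustration, not the
# spiking-neuron model used by neuromorphic hardware.

# Task: learn the logical AND of two inputs from labelled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

for _ in range(20):                      # repeated exposure to the inputs
    for x, target in examples:
        error = target - predict(x)      # how far off the current impression is
        for i in range(2):
            weights[i] += learning_rate * error * x[i]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] once trained
```

After a handful of passes over the examples, the weights settle into an “impression” that reproduces the AND pattern without any explicit rule ever being written for it.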

If we break down computing and artificial intelligence (intelligence demonstrated by machines, also known as AI) into three generations, the direction it’s heading and the requirements it needs to get there become clearer. In the very first generation, AI was heavily based on “rules-based and emulated classical logic” to interpret and reason about a limited range of problems. The second generation, which is where we are currently, has moved from monitoring and analyzing data to sensing and perception via deep-learning networks. What this means is that instead of just crunching extraordinary amounts of numbers to solve problems, deep learning organizes and identifies problems, through its algorithms, in a hierarchical series of increasing complexity. The significance of this is that it gives machines the ability to do things such as analyzing the contents of videos or images. The next generation aims to reach areas of human cognition through the use of AI. This step is crucial to overcoming the hindrance of literal problem-solving to which AI is currently enslaved. AI solutions lack mobility because of their dependence on “literal, deterministic views of events that lack context and common-sense understanding”. The next generation aims to implement neural network architecture that will allow AI to address “novel situations and abstractions to automate ordinary human activities” (“Neuromorphic Computing — Next Generation of AI”). This is why neuromorphic engineering is essential to the future of computing.

The general consensus is that neuromorphic chips will allow the direct implementation of neural network architecture, which in turn will offer much higher computing speeds and complexity for way less energy cost, ergo negating the impending doom of Moore’s crisis. So, what is standing in the way of this monumental technological development? Shouldn’t every piece of new tech have a neuromorphic chip as its processor by now? There are quite a few obstacles standing in the way of fully integrating neuromorphic hardware with current technology. The best way to overcome these obstacles is to improve our current understanding of the human mind and to improve the capabilities of the materials required to support its functions.

For starters, the concept of neuromorphic engineering is a relatively new one, introduced by Carver Mead in the late 1980s (Phil, “Recent Trends in Neuromorphic Engineering”). The field is so new that one of the first neuromorphic chip prototypes was built only in August 2014, by IBM (Phil, “Recent Trends in Neuromorphic Engineering”). Known as the TrueNorth chip, it featured a million neurons and had more in common with the human cortex than with a conventional CPU (Phil, “Recent Trends in Neuromorphic Engineering”). The TrueNorth chip was a strong feat for neuromorphic engineering; however, it lacked the hardware capability to fully achieve the goal of a neuromorphic chip (Valle, 3). What makes a neuromorphic chip so difficult to construct is its “high hardware requirements to run artificial neural networks, both for calculation speed and power consumption” (Valle, 3).

Neural networks require the appropriate hardware to exploit their human-like algorithmic functions of pattern recognition. The silicon-based materials currently used by the tech industry are too weak and restrictive compared with what a neuromorphic chip requires to function properly. They do not focus current in a direct and controlled manner; rather, they pour a flow of electricity loosely over the neurons in an uncontrolled way. There is some debate over which materials can channel the current between two neurons adequately. The two most highly regarded candidates are crystalline silicon and silicon germanium layered on top of one another. A team in Korea, however, is currently using tantalum oxide, which is more durable and gives more control over the flow of the current.

In contrast to the effort to tackle the materials obstacle, a team at the University of Manchester is using traditional computer parts such as cores and routers, connecting them and letting them communicate with each other in innovative ways. This simulates the behavior of the human cortex without needing cutting-edge materials. The project is known as SpiNNaker. While these approaches may seem promising, they have only been able to reach about 300 million synapses (Fulton III).
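
For a sense of what such a system actually computes, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of simple spiking model that platforms like SpiNNaker typically simulate on conventional cores. The parameter values are arbitrary illustrations of my own, not SpiNNaker defaults, and this is not SpiNNaker’s actual code.

```python
# A minimal leaky integrate-and-fire neuron: it accumulates input over time,
# leaks a little charge each step, and emits a spike when it crosses threshold.
# Parameter values are illustrative assumptions, not SpiNNaker defaults.

def simulate_lif(input_current, threshold=1.0, leak=0.9, reset_value=0.0):
    """Return the time steps at which the neuron fires."""
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current   # integrate with leakage
        if potential >= threshold:               # fire a spike and reset
            spike_times.append(t)
            potential = reset_value
    return spike_times

# A constant drive of 0.3 per step: the neuron charges up, fires, and repeats.
print(simulate_lif([0.3] * 20))   # spike times, e.g. [3, 7, 11, 15, 19]
```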

Although 300 million seems like an incredibly high number, a different approach to the hardware issue, pursued by Prof. James K. Gimzewski of UCLA’s California NanoSystems Institute (CNSI), is claimed by Gimzewski to be capable of depositing the almost incomprehensible number of close to 1 billion synaptic interconnects per square centimeter. Gimzewski began this work by searching for the link between nanotechnology and neurology, measuring the differences in electrical-signal potentials recorded between short-term memory and long-term memory (Fulton III).

Walter Freeman, a professor at UC Berkeley, supports Gimzewski’s notion by speculating that the density of the fabric of the cerebral cortex is directly connected to consciousness itself. Freeman defines this fabric as the neuropil: “a thick fabric located within the neocortex that forms the organ of consciousness (the biological process through which an organism can confidently assert that it’s alive and thinking)” (Fulton III).

Gimzewski gave a seminar in which he described how to build a brain using atomic switch networks. In 2014, he presented his research at the Bristol Nanoscience Symposium, unveiling photographs that depicted a grid of copper posts treated with a silver nitrate solution at a near-micron scale. The silver atoms were then exposed to gaseous sulfur, which catalyzed the formation of nano-wires between each point on the grid. These wires behaved similarly to the synapses found in the brain. Gimzewski explained, “We found that when we changed the dimension of the copper posts. We could move… to more nano-wire structures, and it was due to the fact that we can avoid some instabilities that occur on a larger scale. So, we’re able to make these very nice nano-wire structures. Here you can see, you can have very long ones and short ones. And using this process of bottom-up fabrication, using silicon technology, [as opposed to] top-down fabrication using CMOS process… we can then generate these structures… It’s ideal, and each one of these has a synthetic synapse” (Fulton III).

Gimzewski’s research suggests that we may be focusing in the wrong direction with our research in neuromorphic engineering. His presentation revealed a model capable of more than three times the number of precise electrical synapses as its competition. The SpiNNaker team took the approach of repurposing conventional computer parts to improve the power of its output. The team in Korea believed that a newer, more durable material, tantalum oxide, would be the key to creating a substantially more efficient hardware chip. However, both of these approaches are trumped by the logic behind Gimzewski’s. Gimzewski seemed to realize that much research is still needed to fully understand how to develop and implement a device such as a neuromorphic hardware chip, and that it starts with understanding our own human brain.

Because we still do not fully understand how the human brain works, there are many scientists, researchers, and members of the field who disagree with the idea that the neuromorphic computing model is an accurate simulation of the human brain. One such person is Dr. Gerard Marx, the CEO of MX Biotech Ltd., a Jerusalem-based research firm. Marx believes that “the prevailing view of the brain as a kind of infinite tropical rain forest, where trees of neurons swing from synapses in the open breeze, is a load of hogwash. Missing from any such model, Marx points out, is a substance called the extracellular matrix (nECM), which is not a gelatinous, neutral sea but rather an active agent in the brain’s recall process. Marx postulates that memory in the biological brain requires neurons, the nECM, plus a variety of dopants such as neurotransmitters (NT) released into the nECM. Electrochemical processes take place between these three elements, the chemical reactions from which have not only been recorded but are perceived as closely aligned with emotion. The physiological effects associated with recalling a memory (e.g., raised blood pressure, heavier breathing) trigger psychic effects (excitement, fear, anxiety, joy) which in turn have a reinforcing effect on the memory itself” (Scott Fulton III).

While I maintain a very high level of optimism about what this new technology has to offer, I agree that Marx makes some stunningly rational points. The current model for neuromorphic computing seems quite basic, dumbed-down, and shortsighted about what constitutes a human cortex. It is so rudimentary, in fact, that it might serve as the necessary starting point for the base structure of a simulated cortex; however, many fundamental components will still be required to achieve full functionality. Aside from the fact that much of how the human brain works is still unknown to us, the materials we would need to build hardware capable of carrying out simulated brain tasks are far more advanced than what is currently available. This poses another major obstacle for researchers as they attempt to create new materials capable of facilitating the functions that need to be performed.

If we look back to Gimzewski’s research, we can see how one hypothesis about the workings of the human brain led to a tremendous improvement in effective neuromorphic engineering. Dr. Marx accentuates just how many biological processes are involved in making our brains operate. If more researchers are able to uncover these unknown parts and processes in the same way Gimzewski did, we will be able to fully recreate the human cortex. The approaches made thus far, although they have yielded results, have been nothing short of brute force. They achieve, in short, the same effect as the human brain in terms of raw synaptic connections; however, the overall functionality is not even close to that of the human counterpart. We cannot fully reverse engineer the cortex of the human brain without first understanding its components.

Works Cited:

del Valle, Javier, et al. “Challenges in Materials and Devices for Resistive-Switching-Based Neuromorphic Computing.” Journal of Applied Physics, vol. 124, no. 21, Dec. 2018. doi:10.1063/1.5047800.

Kim, Chul-Heung, et al. “Emerging Memory Technologies for Neuromorphic Computing.” Nanotechnology, vol. 30, no. 3, Jan. 2019, p. 032001. doi:10.1088/1361-6528/aae975.

Phil. “Recent Trends in Neuromorphic Engineering.” Sollers Buzz, blog.sollers.edu/data-science/recent-trends-in-neuromorphic-engineering.

Fulton III, Scott. “What Neuromorphic Engineering Is, and Why It’s Triggered an Analog Revolution.” ZDNet, 8 Feb. 2019, https://www.zdnet.com/article/what-neuromorphic-engineering-is-and-why-its-triggered-an-analog-revolution/. Accessed 4 Apr. 2019.

“Neuromorphic Computing — Next Generation of AI.” Intel, https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html. Accessed 16 Apr. 2019.

Ray, Tanmoy. “Demystifying Neural Networks, Deep Learning, Machine Learning, and Artificial Intelligence.” 29 Mar. 2018, https://www.stoodnt.com/blog/ann-neural-networks-deep-learning-machine-learning-artificial-intelligence-differences/.

Beal, Vangie. “What Is Moore’s Law? Webopedia Definition.” Webopedia, https://www.webopedia.com/TERM/M/Moores_Law.html. Accessed 4 Apr. 2019.

“Artificial Intelligence — What It Is and Why It Matters.” SAS, https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html. Accessed 16 Apr. 2019.

Furber, Steve, et al. “Overview of the SpiNNaker System Architecture.” IEEE Transactions on Computers, vol. 62, 2013, pp. 2454–2467. doi:10.1109/TC.2012.142.

Hof, Robert D. “Qualcomm’s Neuromorphic Chips Could Make Robots and Phones More Astute About the World.” MIT Technology Review, https://www.technologyreview.com/s/526506/neuromorphic-chips/. Accessed 16 Apr. 2019.


Joe Addamo

Software Engineer, Computer Science Student @ Columbia University, Programming Instructor, Research Analyst, Tech-Writer, IT Professional.