Neuromorphic Computing

With the end of Moore’s law approaching and Dennard scaling over, the computing community is increasingly looking to new technologies for continued performance improvements. Among these are neuromorphic computers. The term neuromorphic computing was coined by Carver Mead in the late 1980s and at the time referred primarily to mixed analogue–digital implementations of brain-inspired computing. In recent years, however, the term has come to encompass a broad range of hardware implementations as the field has evolved and large-scale funding opportunities for brain-inspired computing systems have become available, including the DARPA SyNAPSE project and the European Union’s Human Brain Project.

Neuromorphic Computing Explained

Neuromorphic computing environments, by contrast, are very promising, especially in the area of high-performance computing. For a good introduction to the topic, Professor Huaqiang Wu of Tsinghua University offers a clear explanation in his video Neuromorphic computing with memristors: from device to system. This is the technical background for the Dynex platform: the Dynex Chip is a circuit design based on memristors. There is a substantial body of research and literature on computing with memristive devices; the following paper, for example, is a good introduction and summary:

Memristors as alternative to CMOS-based computing systems

Memristive Devices for New Computing Paradigms

In complementary metal–oxide–semiconductor (CMOS)-based von Neumann architectures, the intrinsic power and speed inefficiencies are worsened by the drastic increase in information with big data. With the potential to store numerous values in I–V pinched hysteresis, memristors (memory resistors) have emerged as alternatives to existing CMOS-based computing systems. Herein, four types of memristive devices, namely, resistive switching, phase-change, spintronics, and ferroelectric tunnel junction memristors, are explored. The application of these devices to a crossbar array (CBA), which is a novel concept of integrated architecture, is a step toward the realization of ultradense electronics. Exploiting the fascinating capabilities of memristive devices, computing systems can be developed with novel computing paradigms, in which large amounts of data can be stored and processed within CBAs. Looking further ahead, the ways in which memristors could be incorporated in neuromorphic computing systems along with various artificial intelligence algorithms are established. Finally, perspectives and challenges that memristor technology should address to provide excellent alternatives to existing computing systems are discussed. The infinite potential of memristors is the key to unlock new computing paradigms, which pave the way for next-generation computing systems.

Volume 2, Issue 11, November 2020, 2000105
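As a concrete illustration of the pinched I–V hysteresis mentioned in the abstract, the sketch below simulates a simple linear-drift memristor model (after the well-known HP Labs formulation). All parameter values and the helper name `simulate` are illustrative assumptions, not tied to any specific device or to the Dynex Chip:

```python
# Minimal sketch of a linear-drift memristor model; parameter values
# are illustrative, not device-accurate.
import math

R_ON, R_OFF = 100.0, 16_000.0   # low/high resistance states (ohms)
D = 10e-9                        # device thickness (m)
MU_V = 1e-13                     # dopant mobility (m^2 V^-1 s^-1), exaggerated for clarity

def simulate(cycles=1, steps=2000, v_amp=1.0, freq=1.0):
    """Drive the device with a sine voltage and integrate the state w/D."""
    x = 0.1                      # normalized dopant-boundary position w/D
    dt = cycles / (freq * steps)
    trace = []
    for n in range(steps):
        t = n * dt
        v = v_amp * math.sin(2 * math.pi * freq * t)
        m = R_ON * x + R_OFF * (1 - x)       # memristance depends on state
        i = v / m
        x += MU_V * R_ON / D**2 * i * dt     # linear dopant drift
        x = min(max(x, 0.0), 1.0)            # state stays in [0, 1]
        trace.append((v, i))
    return trace

trace = simulate()
# The same voltage visited on the up- and down-sweep yields different
# currents: the pinched hysteresis loop through which the device stores state.
```

Because the memristance depends on the accumulated charge history, the device “remembers” past inputs even when unpowered, which is what makes it usable as both memory and compute element in a crossbar array.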

In-Memory computing

The term neuromorphic computer refers to a non-von Neumann computer whose structure and function are inspired by biology and physics. In a von Neumann computer, which consists of separate CPU and memory units, data and instructions are stored in the memory units. In a neuromorphic computer, by contrast, both processing and memory are governed by neurons and synapses. Unlike von Neumann computers, neuromorphic computers define their programs by the structure and parameters of the neural network rather than by explicit instructions. And while von Neumann computers encode information as binary numerical values, neuromorphic computers receive spikes as input, with information encoded in the time at which a spike occurs, its magnitude, and its shape.
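To make the spike-encoding idea concrete, here is a minimal sketch of latency (time-to-first-spike) coding, one common scheme for encoding an analogue value in spike timing. The scheme, the window length, and the function names are illustrative assumptions, not a description of any specific neuromorphic platform:

```python
# Hedged sketch: latency coding maps a scalar value to a spike time,
# with stronger inputs spiking earlier. Parameters are illustrative.

T_WINDOW = 100.0  # encoding window in ms (assumed)

def encode_latency(value, v_min=0.0, v_max=1.0, window=T_WINDOW):
    """Map a scalar in [v_min, v_max] to a spike time: larger values spike earlier."""
    norm = (value - v_min) / (v_max - v_min)
    return (1.0 - norm) * window

def decode_latency(spike_time, v_min=0.0, v_max=1.0, window=T_WINDOW):
    """Invert the mapping to recover the original value."""
    return v_min + (1.0 - spike_time / window) * (v_max - v_min)

t = encode_latency(0.8)   # strong input -> early spike (~20 ms)
value = decode_latency(t) # recovers ~0.8
```

Note that a single spike time carries what would otherwise take many binary digits to represent, which is one reason spike-based encodings can be so sparse.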

Fundamental Operational Differences

These contrasting characteristics give rise to a number of fundamental operational differences:

  • Inherently parallel operation: all neurons and synapses in a neuromorphic computer can potentially operate simultaneously; however, compared with the cores of parallelized von Neumann systems, neurons and synapses perform relatively simple computations.

  • Memory and processing are co-located: in neuromorphic hardware there is no separation between memory and processing. Although neurons are sometimes thought of as processing units and synapses as memory units, in many implementations neurons and synapses both perform processing and store values. Combining processor and memory mitigates the von Neumann bottleneck caused by processor/memory separation, which otherwise limits maximum throughput. This co-location also reduces accesses to main memory, which consume a large amount of energy compared with compute operations.

  • Neuromorphic computers are inherently scalable: adding more neuromorphic chips increases the number of available neurons and synapses. To run larger and larger networks, multiple physical neuromorphic chips can be treated as a single large neuromorphic system. Several large-scale neuromorphic hardware systems have been successfully implemented, including SpiNNaker and Loihi.

  • Neuromorphic computers use event-driven computation (computing only when data is available) and temporally sparse activity to achieve extremely high computational efficiency. Neurons and synapses perform no work unless there are spikes to process, and spikes are typically relatively sparse in the operation of the network.

  • Stochasticity can be incorporated into neuromorphic computers, for instance when neurons fire, to accommodate noise.
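The event-driven property above can be sketched with a minimal event-driven leaky integrate-and-fire (LIF) neuron: the neuron’s state is touched only when an input spike arrives, and membrane decay is applied lazily at that moment. The class, its parameters, and the event trace are illustrative assumptions, not Dynex-specific:

```python
# Hedged sketch of an event-driven LIF neuron: no spikes, no work.
import math

class LIFNeuron:
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau            # membrane time constant (ms)
        self.threshold = threshold
        self.v = 0.0              # membrane potential
        self.last_t = 0.0         # time of the last processed event

    def receive(self, t, weight):
        """Process one input spike at time t; return True if the neuron fires."""
        # Lazy decay: catch the membrane potential up since the last event,
        # instead of updating it on every clock tick.
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return True
        return False

n = LIFNeuron()
events = [(1.0, 0.6), (2.0, 0.3), (50.0, 0.6), (51.0, 0.5)]
out = [t for t, w in events if n.receive(t, w)]   # fires at t=51.0
```

Between t=2.0 and t=50.0 the neuron consumes no compute at all; the long gap is simply folded into one exponential decay factor when the next spike arrives, which is how temporally sparse activity translates into energy savings.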

Energy Efficiency and Parallel Performance

Neuromorphic computers are well documented in the literature, and their features are often cited as motivating factors for their implementation and use. An attractive feature is their extremely low power consumption: they can consume orders of magnitude less power than conventional computers. This low-power operation follows from their being event-driven and massively parallel, with only a small portion of the entire system active at any given time. Energy efficiency alone is a compelling reason to investigate neuromorphic computers, given the rising energy costs of computing and the growing number of energy-constrained applications (e.g., edge computing). Because neuromorphic computers implement neural-network-style computation inherently, they are a natural platform for many of today’s artificial intelligence and machine learning applications. Their inherent computational properties can also be leveraged to perform a wide variety of other computations.


> Medium: Benchmarking the Dynex Neuromorphic Platform with the Q-Score

Copyright © 2024 Dynex. All rights reserved.
