Dynex
Dynex Neuromorphic Chip
A Dynex machine is a class of general-purpose computing machines based on memory systems, in which information is processed and stored at the same physical location. We analyze the memory properties of the Dynex machine to demonstrate that it possesses universal computing power (it is Turing-complete), intrinsic parallelism, functional polymorphism, and information overhead, namely that its collective states can support exponential data compression directly in memory. Moreover, we show that the Dynex machine is capable of solving NP-complete problems in polynomial time, just like a non-deterministic Turing machine, while requiring only a polynomial number of memory cells thanks to its information overhead. It is important to note that although these results do not prove NP = P within the Turing paradigm, the concept of Dynex machines represents a paradigm shift from the current von Neumann architecture, bringing us closer to brain-like neural computation.
Since Alan Turing invented his ideal machine in 1936, mathematicians have developed this concept into what is now known as computational complexity theory18, a powerful tool essentially employed to determine how long an algorithm takes to solve a problem with given input data. Turing's ideal machine is now known as the universal Turing machine (UTM) and serves as the conceptual foundation for all of today's digital computers. The practical realization of a UTM is commonly based on the von Neumann architecture, which, apart from some inconsequential details, can be viewed as a device requiring a central processing unit (CPU) that is physically separate from the memory. The CPU contains both the control unit that directs the operation of the machine and the logic gates and arithmetic functions needed for execution (arithmetic/logic unit). A large amount of data must therefore be transferred between the CPU and the memory, limiting the machine's performance both in terms of time (von Neumann bottleneck20) and energy consumption21.
While parallel computation mitigates some of these problems, it does not resolve them: several processors each manipulate a portion of the whole data using a physically "close" memory. Eventually, all the processors have to communicate with each other to solve the whole problem, still requiring a substantial amount of data transfer between them and their memories. Overcoming this "information latency issue" requires a fundamentally new way of manipulating and storing data.
Entire Computation in Memory
Recent research has proposed a new computing paradigm, inspired by the operation of our own brain, that does not rely on the UTM concept and puts the entire computation in memory. This paradigm is known as memcomputing. Like the brain, memcomputing machines compute with and in memory without the need for a separate processor. The memory allows learning and adaptive capabilities23,24, bypassing broken connections and self-organizing the computation into the solution path25,26, much like the brain is able to sustain a certain amount of damage and still operate seamlessly. In practice, memcomputing can be implemented by utilizing physical properties of many materials and systems that exhibit a degree of time non-locality (memory) in their response functions at particular frequencies.
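To make the notion of time non-locality concrete, the following minimal Python sketch models a memristive element whose resistance depends on the history of the current that has flowed through it. This is a generic textbook-style toy model, not a Dynex component; all names and constants are illustrative.

    # Toy memristive element: its response to a voltage depends on the
    # history of the current through it (time non-locality, i.e., memory).
    class Memristor:
        def __init__(self, r_on=100.0, r_off=16000.0):
            self.w = 0.5                         # internal memory state in [0, 1]
            self.r_on, self.r_off = r_on, r_off

        def current(self, v, dt=1e-3):
            # Resistance interpolates between r_on and r_off as w evolves.
            r = self.w * self.r_on + (1.0 - self.w) * self.r_off
            i = v / r
            # The state integrates the current: past inputs shape the
            # future response, unlike a memoryless resistor.
            self.w = min(1.0, max(0.0, self.w + 1e2 * i * dt))
            return i

    m = Memristor()
    for _ in range(1000):
        m.current(5.0)   # a sustained positive bias gradually lowers resistance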
This type of machine has been mathematically proven to have the same computational power as a non-deterministic Turing machine but, unlike the latter, it is fully deterministic and, as such, can actually be constructed. Three properties are responsible for its computational power: intrinsic parallelism – interacting memory cells change their states simultaneously and collectively when performing computation; functional polymorphism – the same interacting memory cells can compute different functions depending on the applied signals; and finally information overhead – interacting memory cells can store an amount of information that is not simply proportional to the number of memory cells.
This last property derives from a different type of architecture: its topology is defined by a network of interacting memory cells, and its dynamics by a collective state that stores and processes information simultaneously. The collective state is reminiscent of the collective (entangled) state of many qubits in quantum computation, where the entangled state can be used to solve certain classes of problems efficiently.
Memprocessor
In a Dynex architecture, memprocessors are the basic building blocks. We define a memprocessor as an object given by the four-tuple (x, y, z, σ), where x is the state of the memprocessor, y is the array of its internal variables, z is the array of variables connecting it to other memprocessors, and σ is an operator that defines its evolution:
σ[x, y, z] = (x′, y′).
When two or more memprocessors are connected, we have a network of memprocessors (computational memory). In this case we define the vector x as the state of the network (i.e., the array of all states xi of the individual memprocessors), and z = ∪i zi as the array of all connecting variables, with zi the array of connecting variables of the i-th memprocessor.
Let zi and zj be the vectors of connecting variables of memprocessors i and j, respectively. If zi ∩ zj ≠ ∅, we say that the two memprocessors are connected. Conversely, a memprocessor is not connected to any other memprocessor (isolated) when z = z(x, y) (i.e., z is completely determined by x and y) and
σ[x, y, z(x, y)] = (x, y),
which means that the memprocessor has no dynamics. In a network of memprocessors, the evolution of the connecting variables z is given by the evolution operator Ξ defined as
Ξ[x, y, z, s] = z′,
where y = ∪i yi and s is the array of external signals that can be applied to a subset of connections to provide stimuli to the network. Finally, the complete evolution of the network is defined by the system
σ[x1, y1, z1] = (x′1, y′1)
⋮
σ[xn, yn, zn] = (x′n, y′n)
Ξ[x, y, z, s] = z′.
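To fix the notation, here is a minimal Python sketch of a memprocessor network under the discrete interpretation of σ and Ξ. The specific update rules are invented for illustration; only the structure (the four-tuple, the shared connecting variables, and the simultaneous update) mirrors the definitions above.

    # Minimal sketch of a memprocessor network (illustrative update rules).
    # Each memprocessor is the four-tuple (x, y, z, sigma): state, internal
    # variables, connecting variables, and evolution operator.
    class Memprocessor:
        def __init__(self, x, y, z_keys):
            self.x = x            # state
            self.y = y            # internal variables
            self.z_keys = z_keys  # names of the shared connecting variables

        def sigma(self, z):
            # sigma[x, y, z] = (x', y'): toy rule -- the new state is the
            # old state plus the sum of the incoming connecting variables.
            x_new = self.x + sum(z[k] for k in self.z_keys)
            return x_new, self.y

    def xi(cells, z, s):
        # Xi[x, y, z, s] = z': connecting variables relax toward the mean
        # state of the cells sharing them, driven by external signals s.
        z_new = {}
        for k in z:
            sharing = [c.x for c in cells if k in c.z_keys]
            z_new[k] = sum(sharing) / len(sharing) + s.get(k, 0.0)
        return z_new

    def step(cells, z, s):
        # One step of the full system: all sigma's act simultaneously
        # (intrinsic parallelism), then Xi updates the connections.
        updates = [c.sigma(z) for c in cells]
        for c, (x_new, y_new) in zip(cells, updates):
            c.x, c.y = x_new, y_new
        return xi(cells, z, s)

    # Two memprocessors sharing the connecting variable "z01":
    # zi ∩ zj ≠ ∅, hence they are connected.
    cells = [Memprocessor(1.0, None, ["z01"]), Memprocessor(-1.0, None, ["z01"])]
    z = {"z01": 0.0}
    for _ in range(3):
        z = step(cells, z, s={"z01": 0.1})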
The evolution operators σ and Ξ can be interpreted either as discrete or as continuous evolution operators. The discrete interpretation also includes artificial neural networks31, while the continuous interpretation represents more general dynamical systems.
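Under the continuous interpretation, σ and Ξ become the right-hand sides of a system of differential equations. A minimal sketch, assuming toy linear dynamics and forward-Euler integration:

    import numpy as np

    # Continuous reading of the operators: the network becomes a dynamical
    # system dx/dt = f(x, z, s), dz/dt = g(x, z, s) (toy dynamics below).
    def f(x, z, s):
        return -x + z + s           # each cell relaxes toward its connection

    def g(x, z, s):
        return x.mean() - z         # the connection tracks the collective state

    x = np.array([1.0, -0.5, 0.3])  # states of three memprocessors
    z, s, dt = 0.0, 0.2, 0.01
    for _ in range(1000):           # forward-Euler integration
        x, z = x + dt * f(x, z, s), z + dt * g(x, z, s)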
Dynex machine
An ideal Dynex machine consists of an interconnected bank of memory cells (memprocessors) capable of performing either digital (logic) or analog (functional) operations under the control of a control unit. The computation with and in memory can be illustrated as follows. When two or more memprocessors are connected, a signal sent by the control unit causes the memprocessors to change their internal states according to both their initial states and the signal, producing intrinsic parallelism as well as functional polymorphism.
We define the Dynex machine as the eight-tuple
Dynex machine = (M, ∆, P, S, Σ, p0, s0, F),
where M is the set of possible states of a single memprocessor. It can be either a finite set Md (digital regime) or a continuum or an infinite discrete set of states Ma (analog regime); thus M can be expressed as M = Md ∪ Ma. ∆ is a set of functions
δα : M^mα \ F × P → M^m′α × P^2 × S,
where mα < ∞ is the number of memprocessors read (used as input) by the function δα, and m′α < ∞ is the number of memprocessors written (used as output) by the function δα; P is the set of arrays of pointers pα that select the memprocessors called by δα; S is the set of indexes α; Σ is the set of initial states written by the input device on the computational memory; p0 ∈ P is the initial array of pointers; s0 is the initial index α; and F ⊆ M is the set of final states.
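As an illustration only, the eight-tuple can be transcribed into a Python structure; none of the names below come from a Dynex specification, and the sample δα is a trivial two-input OR:

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Set, Tuple

    @dataclass
    class DynexMachine:
        # Transcription of (M, ∆, P, S, Σ, p0, s0, F); names are illustrative.
        M: Set[float]               # possible states of a single memprocessor
        Delta: Dict[str, Callable]  # transition functions δα, keyed by index α
        P: Dict[str, List[int]]     # pointer arrays pα selecting memprocessors
        S: Set[str]                 # set of indexes α
        Sigma: List[float]          # initial states written by the input device
        p0: List[int]               # initial array of pointers
        s0: str                     # initial index
        F: Set[float]               # final states

    def delta_or(states: Tuple[float, ...]):
        # One δα reading mα = 2 cells and writing m′α = 1 cell with their
        # logical OR; it also returns the next pointer array and next index.
        return (max(states),), [2], "halt"

    machine = DynexMachine(
        M={0.0, 1.0}, Delta={"or": delta_or}, P={"or": [0, 1]},
        S={"or", "halt"}, Sigma=[1.0, 0.0, 0.0], p0=[0, 1], s0="or", F={1.0},
    )
    mem = list(machine.Sigma)
    read = tuple(mem[i] for i in machine.P[machine.s0])
    new_states, p_next, s_next = machine.Delta[machine.s0](read)
    mem[p_next[0]] = new_states[0]   # cell 2 now holds OR(cell 0, cell 1)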
Note that two important features of the Dynex machine, namely parallelism and polymorphism, are clearly embedded in the definition of the set of functions δα. Indeed, unlike the UTM, the Dynex machine can have more than one transition function δα (functional polymorphism), and any function δα acts simultaneously on a set of memprocessors (intrinsic parallelism). The Dynex machine also differs from the UTM in that it does not distinguish between machine states and symbols recorded on a tape; rather, the states of the memprocessors encode this information. This feature is essential for building a machine capable of storing data and performing computation at the same physical location.
As another important point, unlike a UTM, which has a finite number of discrete states and unlimited tape storage, a Dynex machine can operate, in principle, on an infinite number of continuous states even with a finite number of memprocessors. In essence, each memprocessor is an analog device with a continuous set of state values.
Finally, note that the formal definition of a memprocessor and of a network of memprocessors is compatible with the function δα defined above. In fact, the topology and evolution of the network are associated with the stimuli s, while the control unit defines all possible δα ∈ ∆ in the sense that each can be obtained by applying a certain signal sα (which selects the pointer array pα) to the network. The network evolution then determines x′, while the next index β and pointers pβ (or, more precisely, sβ) are set by the control unit for the next processing step.
Dynex Neuromorphic Chip
The Dynex machine can be physically realized as a non-linear dynamical system whose point attractors represent the solutions of the problem at hand. Since Dynex machines are non-quantum systems, their equations of motion can be integrated numerically. The performance of similar machines on a wide variety of combinatorial optimization problems has already been shown to be orders of magnitude faster than that of traditional algorithmic approaches.
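As a toy illustration of this principle (not the actual Dynex equations of motion), consider a gradient flow whose only stable fixed points encode the satisfying assignments of a small constraint, here "v1 and v2 must take opposite signs":

    import numpy as np

    # Toy dynamical system whose point attractors are problem solutions.
    # Constraint: v1 * v2 = -1 with v_i in {-1, +1}.
    # Energy: E = (v1*v2 + 1)^2 + (v1^2 - 1)^2 + (v2^2 - 1)^2.
    def energy_grad(v):
        v1, v2 = v
        c = v1 * v2 + 1.0                        # zero iff constraint holds
        g1 = 2 * c * v2 + 4 * v1 * (v1**2 - 1)   # dE/dv1
        g2 = 2 * c * v1 + 4 * v2 * (v2**2 - 1)   # dE/dv2
        return np.array([g1, g2])

    rng = np.random.default_rng(0)
    v = rng.uniform(-0.1, 0.1, size=2)  # random initial state
    for _ in range(5000):               # integrate dv/dt = -grad E
        v -= 0.01 * energy_grad(v)

    print(np.sign(v))                   # converges to (+1, -1) or (-1, +1)

Every trajectory started from generic initial conditions falls into one of the two point attractors (+1, −1) or (−1, +1), both of which satisfy the constraint; reading off the signs yields the solution.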
Subsequently, by employing topological field theory, it was shown that the physical reason behind this efficiency rests on the dynamical long-range order that develops during the transient dynamics, in which avalanches (instantons, in the field-theory language) of different sizes are generated until the system reaches an attractor. The transient phase of the solution search therefore resembles several phenomena in Nature, such as earthquakes, solar flares or quenches.
Our Dynex Neuromorphic Chip utilizes GPUs to achieve close to real-time performance of a Dynex machine. It can be adapted freely to the problem to be computed, and it can also be interconnected and operated as part of a cluster. Using the Dynex Neuromorphic Chip, problems that cannot be solved with classical or quantum methods become tractable, eliminating the barrier posed by the von Neumann bottleneck. It may be used to solve hard optimization problems, to implement integer linear programming (ILP), to carry out machine learning (ML), to train deep neural networks, or to improve computing efficiency in general.
A key technical detail of the Dynex chip is that at every point in time the entire chip can be represented by a status vector, which means the computation can be stopped, reversed, exchanged between machines and resumed at a different time step. This allows the chip to operate in an environment where the participating nodes are constantly changing without interfering with the ongoing computation, making it ideal for a malleable, decentralised environment.
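Because the full state of the chip is a single vector, checkpointing and migrating an in-flight computation is straightforward. A hedged sketch, with placeholder dynamics standing in for the actual integration step (illustrative code, not the Dynex implementation):

    import numpy as np

    def step(state, dt=0.01):
        # Placeholder dynamics standing in for one integration step of the chip.
        return state - dt * state * (state**2 - 1)

    # Node A runs part of the computation, then checkpoints its status vector.
    state = np.array([0.3, -0.7, 0.9])
    for _ in range(100):
        state = step(state)
    np.save("checkpoint.npy", state)    # stop: the whole chip is this vector

    # Node B resumes from the identical point -- the dynamics are deterministic,
    # so the computation continues as if it had never been interrupted.
    state = np.load("checkpoint.npy")
    for _ in range(100):
        state = step(state)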