Dynex
Dynex Protocol
Bitcoin has been a successful implementation of the concept of p2p electronic cash. Both professionals and the general public have come to appreciate the convenient combination of public transactions and proof-of-work as a trust model. Today, the user base of electronic cash is growing at a steady pace; customers are attracted to the low fees and the anonymity provided by electronic cash, while merchants value its predictable and decentralized emission. Bitcoin has effectively proved that electronic cash can be as simple as paper money and as convenient as credit cards. Unfortunately, Bitcoin suffers from several deficiencies.
The core component of any blockchain system is its consensus protocol. Dynex utilises an egalitarian Proof-of-Work (PoW) consensus protocol which offers several advantages over traditional one-CPU-one-vote algorithms:
It is well known that one of the most significant problems with a PoW system is the development of specialized hardware (ASICs), which allows a small group of ASIC-equipped miners to solve PoW puzzles orders of magnitude faster and more efficiently than anyone else. Memory-hard PoW schemes can solve this problem by reducing the disparity between ASICs and commodity hardware. We believe that the most promising approach is to use asymmetric memory-hard PoW schemes that require significantly less memory to verify a solution than to discover it. Secondly, a PoW network’s decentralization is threatened by the fact that even large miners tend to form mining pools, leading to a situation in which just a few pool operators (5 in Bitcoin and 2 in Ethereum at the time of writing) control more than 51% of computing power. Our protocol is both memory-hard and pool-resistant.
Egalitarian Proof-of-work
In this section we detail our proof-of-work algorithm. Our primary goal is to close the gap between CPU (majority) and GPU/FPGA/ASIC (minority) miners. It is acceptable for some users to have a certain advantage over others, but their investments should grow at least linearly with their power. More generally, producing special-purpose devices should be as unprofitable as possible. The original Bitcoin proof-of-work protocol uses the CPU-intensive pricing function SHA-256. It consists mainly of basic logical operators and relies solely on the computational speed of the processor, and is therefore perfectly suited to multicore/pipelined implementation. However, modern computers are not limited by the number of operations per second alone, but also by memory size. While some processors can be substantially faster than others, memory sizes are less likely to vary between machines.
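For concreteness, the Bitcoin pricing function mentioned above amounts to a partial hash search over double SHA-256. The following minimal Python sketch (the header bytes and the deliberately easy target are illustrative values, not protocol constants) shows why such a search stresses only the processor: the loop carries almost no state and touches essentially no memory.

    import hashlib

    def sha256d(data: bytes) -> bytes:
        # Bitcoin hashes the block header with SHA-256 applied twice.
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def search_nonce(header: bytes, target: int, max_nonce: int = 2**32):
        # Scan nonces until the double SHA-256 of header||nonce, read as an
        # integer, falls below the target. The work is pure computation.
        for nonce in range(max_nonce):
            digest = sha256d(header + nonce.to_bytes(4, "little"))
            if int.from_bytes(digest, "big") < target:
                return nonce
        return None

    # Illustrative header and a very easy target so the search finishes quickly.
    print(search_nonce(b"example block header", 1 << 250))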
Memory-bound price functions were first introduced by Abadi et al. and were defined as “functions whose computation time is dominated by the time spent accessing memory”. The main idea is to construct an algorithm that allocates a large block of data (a “scratchpad”) in memory that can be accessed relatively slowly (for example, RAM) and then “accesses an unpredictable sequence of locations” within it. The block should be large enough that preserving the data is more advantageous than recomputing it for each access. The algorithm should also prevent internal parallelism, so that N simultaneous threads require N times more memory at once.
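The idea can be sketched as follows; this is a conceptual illustration rather than any real scheme, and the scratchpad size, iteration count and choice of SHA-256 as the mixing function are placeholders. The scratchpad is filled first, and the main loop then reads it at data-dependent, unpredictable positions, so the whole pad has to stay resident in memory.

    import hashlib

    SCRATCHPAD_BLOCKS = 2**15   # placeholder: 32768 blocks of 32 bytes (1 MB)
    ITERATIONS = 2**16          # placeholder iteration count

    def memory_bound_hash(seed: bytes) -> bytes:
        # Step 1: fill a large scratchpad deterministically from the seed.
        scratchpad = []
        state = hashlib.sha256(seed).digest()
        for _ in range(SCRATCHPAD_BLOCKS):
            state = hashlib.sha256(state).digest()
            scratchpad.append(state)

        # Step 2: read the scratchpad at unpredictable, data-dependent
        # positions; keeping the whole pad in memory is cheaper than
        # recomputing the requested blocks on demand.
        acc = state
        for _ in range(ITERATIONS):
            index = int.from_bytes(acc[:4], "little") % SCRATCHPAD_BLOCKS
            acc = hashlib.sha256(acc + scratchpad[index]).digest()
        return acc

    print(memory_bound_hash(b"block header || nonce").hex())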
Dwork et al. investigated and formalized this approach, which led them to suggest another variant of the pricing function: “MBound”. A further contribution belongs to F. Coelho, who proposed the most effective solution, “Hokkaido”. To our knowledge, the most recent work based on the idea of pseudo-random searches in a large array is the algorithm known as “scrypt” by C. Percival. Unlike the previous functions, it focuses on key derivation rather than proof-of-work systems. Despite this, scrypt can serve our purpose: it works well as a pricing function in a partial hash inversion problem such as the SHA-256-based one in Bitcoin.
By now, scrypt has already been applied in Litecoin and some other Bitcoin forks. However, its implementation is not really memory-bound: the ratio “memory access time / overall time” is not large enough because each instance uses only 128 KB. This permits GPU miners to be roughly 10 times more effective and continues to leave open the possibility of creating relatively cheap but highly-efficient mining devices. Moreover, the scrypt construction itself allows a linear trade-off between memory size and CPU speed due to the fact that every block in the scratchpad is derived only from the previous one. For example, you can store every second block and recalculate the others in a lazy way, i.e. only when they become necessary. The pseudo-random indexes are assumed to be uniformly distributed, hence the expected number of additional block recalculations is 1/2 · N, where N is the number of iterations. The overall computation time increases by less than half because there are also time-independent (constant-time) operations, such as preparing the scratchpad and hashing on every iteration. Saving 2/3 of the memory costs 1/3 · N + 1/3 · 2 · N = N additional recalculations; saving 9/10 results in 1/10 · N + … + 1/10 · 9 · N = 4.5 · N. It is easy to show that storing only 1/s of all blocks increases the time by less than a factor of (s − 1)/2. This in turn implies that a machine with a CPU 200 times faster than modern chips could store only 320 bytes of the scratchpad.
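The trade-off arithmetic can be checked directly: if only every s-th block is stored, a uniformly distributed access lands 0, 1, ..., s − 1 derivations away from the most recent stored block with equal probability, for an average of (s − 1)/2 recalculations per access. A short sketch in Python (pure arithmetic, no protocol code):

    # Expected number of extra block recalculations per scratchpad access
    # when only every s-th scrypt scratchpad block is stored.
    def expected_recalcs_per_access(s: int) -> float:
        return sum(range(s)) / s   # = (s - 1) / 2

    for s in (2, 3, 10):
        print(f"store 1/{s} of the blocks -> "
              f"{expected_recalcs_per_access(s)} * N extra recalculations")
    # Prints 0.5 * N, 1.0 * N and 4.5 * N, matching the figures above.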
Our algorithm is a memory-bound algorithm for the proof-of-work pricing function. It relies on random access to slow memory and emphasizes latency dependence. As opposed to scrypt, every new block (64 bytes in length) depends on all the previous blocks. As a result, a hypothetical “memory-saver” would have to increase its calculation speed exponentially (a conceptual sketch follows the list below). It requires around 2 MB per instance:
Fits in the L3 cache (per core) of modern processors, which should become mainstream in a few years;
A megabyte of internal memory is an almost unacceptable size for a modern ASIC pipeline;
GPUs may run hundreds of concurrent instances, but they are limited in other ways: GDDR5 memory is slower than the CPU L3 cache and remarkable for its bandwidth, not random access speed.
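The sequential property can be illustrated with the following conceptual sketch. It is not the actual Dynex/CryptoNote hash: the roughly 2 MB scratchpad and 64-byte blocks follow the description above, but the use of SHA-512 as the chaining function and the final digest step are simplifications chosen only so the example runs. Each new block folds in a running state (and therefore, transitively, all previous blocks) plus one earlier block selected by a data-dependent index.

    import hashlib

    BLOCK_SIZE = 64                     # bytes per scratchpad block, as in the text
    SCRATCHPAD_SIZE = 2 * 1024 * 1024   # roughly 2 MB per instance
    NUM_BLOCKS = SCRATCHPAD_SIZE // BLOCK_SIZE

    def fill_scratchpad(seed: bytes) -> list:
        # The running state is folded into every step, so each new 64-byte
        # block depends (transitively) on all of its predecessors ...
        state = hashlib.sha512(seed).digest()   # SHA-512 digests are 64 bytes
        scratchpad = [state]
        for _ in range(1, NUM_BLOCKS):
            # ... and additionally on one earlier block chosen by a
            # data-dependent index: a latency-bound random read.
            index = int.from_bytes(state[:8], "little") % len(scratchpad)
            state = hashlib.sha512(state + scratchpad[index]).digest()
            scratchpad.append(state)
        return scratchpad

    pad = fill_scratchpad(b"block header || nonce")
    # Toy finalisation: condense the scratchpad into a short digest.
    print(hashlib.sha256(b"".join(pad)).hexdigest())

Because a discarded block can only be rebuilt by replaying the chain from an earlier point, and each replay step may itself demand other discarded blocks, the recomputation cost compounds instead of growing linearly as it does against scrypt.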
Significant expansion of the scratchpad would require an increase in iterations, which in turn implies an overall time increase. “Heavy” calls in a trust-less p2p network may lead to serious vulnerabilities, because nodes are obliged to check every new block’s proof-of-work. If a node spends a considerable amount of time on each hash evaluation, it can be easily DDoSed by a flood of fake objects with arbitrary work data (nonce values).
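To make the verification constraint concrete, a node's check of an incoming block is a single evaluation of the pricing function followed by a target comparison. In this hedged sketch, pow_hash is a placeholder standing in for the memory-bound function (plain SHA-256 here only so the snippet runs), and the header, nonce and target values are illustrative:

    import hashlib

    def pow_hash(data: bytes) -> bytes:
        # Placeholder for the memory-bound pricing function; a real node
        # would call the full scratchpad-based hash here.
        return hashlib.sha256(data).digest()

    def verify_block(header: bytes, nonce: int, target: int) -> bool:
        # One full evaluation per received block: if this single call is
        # slow, a flood of blocks with arbitrary nonces ties the node up.
        digest = pow_hash(header + nonce.to_bytes(4, "little"))
        return int.from_bytes(digest, "big") < target

    print(verify_block(b"candidate header", 12345, 1 << 252))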
Reference: “CryptoNote v 2.0”; Nicolas van Saberhagen (pseudonymous author); 2013