In recent years there has been growing interest in neuroscience and the prospect of understanding the human brain, as well as in using these findings to address some limitations of today's technology. Principles of neural computation, characterized by massively parallel processing, distributed information storage and adaptation, can be exploited to enhance, or even revolutionize, computer systems and technology. Recognizing this great potential, and especially the importance of understanding the human brain, the research community has launched remarkable projects to support the field of computational neuroscience, which studies the information processing properties of the nervous system. A notable ongoing project is the Blue Brain Project [1] at the École Polytechnique Fédérale de Lausanne (EPFL), which seeks to simulate 10,000 neurons of the rat brain using detailed studies of the nervous system. Another project, at the IBM Almaden Research Center in California, focuses on understanding the cortex, the outer processing layer of the brain. Biological neuron models are employed for the simulation of 900 million neurons connected by 9 trillion synapses [2]. Both projects have produced important research findings, but they rely on large-scale simulations on High-Performance Computing (HPC) clusters [3], known as supercomputers, such as the IBM BlueGene/P. These simulations have the disadvantage of requiring long execution times: simulating a few seconds of brain activity requires many minutes of processing on these machines. This is a consequence of the complexity introduced by the large number of parameters in the studied neural models, and of the drawbacks of the von Neumann computing architecture that all HPC clusters employ. Given the need to scale up the employed neural models in order to describe brain function and structure in more detail, this drawback could become critical, calling for alternative approaches. Moreover, the energy consumption of HPC clusters is another important issue that significantly increases the cost of these simulations. With existing HPC technology, the simulation of a neural model describing activity at the scale of a whole brain would require a supercomputer 1,000 times more powerful than the best existing today, with power requirements equal to the energy needs of a large city [4].
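To see why such simulations are time-consuming, consider a minimal leaky integrate-and-fire network update in Python (an illustrative toy with made-up parameters, not code from the cited projects): every time step must update all N neurons and propagate spikes through an N×N weight matrix, so the cost grows with model size, connectivity and temporal resolution.

```python
# Toy leaky integrate-and-fire network: illustrates why software simulation
# of neural models is slow on von Neumann machines. Every time step touches
# all N neurons and, via the weight matrix, all N*N potential synapses.
import numpy as np

N = 1_000                      # neurons (toy size; real models are far larger)
dt = 1e-4                      # 0.1 ms time step
T = 1.0                        # simulate 1 s of biological time
tau, v_th, v_reset = 20e-3, 1.0, 0.0   # assumed illustrative parameters

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.05, size=(N, N))   # dense synaptic weight matrix
v = np.zeros(N)                          # membrane potentials
i_ext = rng.uniform(1.0, 1.5, size=N)    # constant external drive

for _ in range(int(T / dt)):             # 10,000 steps for 1 s of biology
    spikes = v >= v_th                   # which neurons fire this step
    v[spikes] = v_reset
    # each step streams the whole weight matrix through the memory bus:
    # this traffic, not the arithmetic, is the von Neumann bottleneck
    v += (dt / tau) * (i_ext - v) + W @ spikes.astype(v.dtype)
```

At 0.1 ms resolution, one second of biological time already costs 10,000 such dense updates; scaling N toward biological counts makes the memory traffic between processor and RAM the dominant expense on a von Neumann machine.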
Neuromorphic systems use a radically different architecture compared to conventional computer systems. Their structure and function resemble the nervous system, thereby greatly advancing the realism with which neural models can be studied. This makes it possible to move from simulations of a neural model to emulations, using brain-like hardware systems that reproduce the behavior of neural networks more faithfully. Neuromorphic systems can mimic more closely the parallel processing of biological neural networks. Moreover, their re-configurability is an important asset for replicating the plasticity and flexibility that the human brain demonstrates. The most important difference between neuromorphic and conventional systems is that the latter use the von Neumann architecture, in which the central processing unit is physically separated from the memory, whereas the former are characterized by distributed memory: the synapses of the network can implement memory and complex computation at the same time [5]. The rapid development of very-large-scale integration (VLSI) technology for chip fabrication has made it possible to place hundreds of thousands of electronic components on a single circuit. This significantly increases the density of computational components and can boost the size of neuromorphic systems, which is steadily increasing to handle progressively more advanced computational tasks. Dedicated hardware can offer high-speed execution of large neural models in silicon at an affordable cost compared to HPC clusters. All of the above allow large-scale neuromorphic systems to represent an attractive alternative to conventional numerical software simulations on HPC clusters, which face the problems of growing computation times, power consumption and limited model scalability.
Research publications in this field have greatly increased in number, reflecting the strong potential of neuromorphic engineering. One research project that aims to develop a neuromorphic system is Neurogrid [6] from Stanford University. The project exploits the non-linear characteristics of transistors in order to emulate the behavior of real neural cells in silicon. The final hardware platform is a multi-chip neuromorphic system that can implement and emulate many different cortical areas in real time and reveal their interactions. This approach uses sub-threshold mixed-signal circuits that draw currents comparable to those of biological cells. Operating in this regime slows the execution down to a biologically realistic rate, providing the ground for the simulation of a million cortical neurons in real time. Asynchronous multicast digital communication implements the synaptic connections of the network. A variety of neuronal cells and synaptic interactions can be implemented by taking advantage of the full re-programmability of the system. However, external resources must be used to realize synaptic plasticity mechanisms, since the system currently employs only linear synapses [6]. The aim of the Neurogrid project is the real-time simulation of millions of neurons connected by billions of synapses in an affordable way, providing the benefit of a power-efficient system with an energy consumption orders of magnitude lower than that of HPC clusters.
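The multicast communication can be pictured with the address-event representation commonly used in neuromorphic systems: a spike travels the network simply as the address of the neuron that fired, and the fan-out is recovered from routing tables. The toy sketch below (hypothetical names, not Neurogrid's actual implementation) shows the idea:

```python
# Toy address-event multicast: a spike is just the source neuron's address;
# the fabric delivers one event on the wire to many registered targets.
# Hypothetical sketch, not Neurogrid's actual protocol.
from collections import defaultdict

routing_table = defaultdict(list)      # source address -> target addresses

def connect(src: int, dst: int) -> None:
    routing_table[src].append(dst)

def emit_spike(src: int, deliver) -> None:
    # multicast: one event emitted, many deliveries at the receivers
    for dst in routing_table[src]:
        deliver(dst)

connect(7, 42)
connect(7, 43)
emit_spike(7, deliver=lambda dst: print(f"spike delivered to neuron {dst}"))
```

The design choice this illustrates is that no analog signal ever leaves a chip: only compact digital addresses do, which is what allows slow, power-efficient sub-threshold circuits to be combined with fast digital interconnect.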
Another notable project, run by the University of Manchester, seeks to address the same problem by using commodity digital microprocessors connected to a dedicated pulse-communication architecture for neural simulations. The project is called SpiNNaker [7] and it aims to model brain activity in real time, offering high flexibility and programmability, much like a general-purpose computer. For this purpose a computing platform is being developed that comprises 57,600 custom-designed chips, each implementing 18 low-power ARM9 processor cores. At the center of each chip, a dedicated router receives all incoming packets from the neighboring chips and forwards them onwards. A synchronous dynamic RAM stacked on top of each chip holds the connectivity information for up to 16 million synaptic connections. This digital system approach has two main advantages. The first is the optimized routing and distribution of data packets (or events) through a dedicated global asynchronous communication infrastructure [8]. The second is the high programmability of the overall digital system, which is particularly valuable for replicating the natural flexibility and plasticity of the brain. However, owing to its conventional architecture, this approach inherits the limitations of a typical von Neumann computing system. The data transfer rate (throughput) between the processor and the memory is limited by the shared memory bus. In particular, the more complex the simulated model is, the fewer elements can be simulated by the given platform (limited scaling), as the sketch below illustrates.
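This limited scaling can be made concrete with a back-of-the-envelope budget (the numbers below are assumptions for illustration, not SpiNNaker's published figures): each core has a fixed cycle budget per real-time tick, so a costlier neuron model directly reduces how many neurons that core can host in real time.

```python
# Illustrative real-time budget for a software-simulated neuron core.
# CYCLES_PER_STEP is an assumed per-core cycle budget per 1 ms tick,
# not a figure from the SpiNNaker papers.
CYCLES_PER_STEP = 200_000

def neurons_per_core(cycles_per_neuron: int) -> int:
    # more complex model -> more cycles per neuron -> fewer neurons per core
    return CYCLES_PER_STEP // cycles_per_neuron

for cost in (200, 400, 800):   # simple -> complex neuron models
    print(f"{cost} cycles/neuron -> {neurons_per_core(cost)} neurons/core")
```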
An alternative to the above approaches is a neuromorphic system built from highly integrated mixed-signal circuits for the emulation of individual neurons, with dedicated communication for their interconnectivity. Such a system is being developed by the University of Heidelberg (UHEI) and Technische Universität Dresden (TUD), and constitutes the hardware platform of the BrainScaleS project. This approach uses wafer-scale integration technology [9] and custom-made analog circuits to replicate the behavior of neural cells at large scale with multiple interconnected silicon wafers. Each wafer module implements up to 200,000 analog neurons and 40 million learning synapses, interconnected via a high-density on-wafer routing grid. An important feature of this system is that computation is accelerated with respect to biological real time by a factor of 10,000. In this way the required simulation time of an implemented neural network shrinks from hours to seconds. The system achieves power efficiency relative to HPC clusters by exploiting the analog neural computation of dedicated hardware. Together with sufficient re-configurability and the accelerated computation, this makes it an attractive alternative to simulations on HPC clusters. However, the system requires very high communication bandwidth and fast digital circuits in order to cope with the acceleration. The overall system represents a flexible research platform for studying the dynamics of large-scale biologically inspired neural networks in an affordable way.
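The implications of the 10,000× acceleration are easy to quantify; the 10 Hz mean firing rate below is an assumed illustrative figure, not one reported by the project.

```python
# Consequences of the 10,000x acceleration factor (illustrative arithmetic;
# the mean firing rate is an assumption, not a figure from the paper).
ACCEL = 10_000
bio_time_s = 3600.0                          # 1 hour of biological activity
wall_clock_s = bio_time_s / ACCEL            # -> 0.36 s on the wafer

neurons = 200_000                            # per wafer module (from the text)
bio_rate_hz = 10.0                           # assumed mean biological rate
event_rate = neurons * bio_rate_hz * ACCEL   # events/s the fabric must carry

print(f"1 h of biological time runs in {wall_clock_s:.2f} s")
print(f"spike traffic per wafer: {event_rate:.1e} events/s")   # ~2e10
```

Under these assumptions a 10 Hz neuron emits events at 100 kHz in wall-clock time, which is why the off-wafer communication must sustain on the order of 10^10 events per second: the bandwidth demand quoted above.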
Finally, another notable system developed for the simulation of large-scale neural networks in real time is TrueNorth from IBM [10]. The project takes inspiration from the structure of biological neural systems, departing in this way from the typical von Neumann architecture in order to deliver a new hardware architecture and computational paradigm. A single chip consists of 4,096 cores, each simulating up to 256 neurons and implementing 256×256 programmable synaptic connections. The chip is fully digital and the routing of events is asynchronous, although not flexible to the same degree as in SpiNNaker. The overall system, composed of a multi-chip infrastructure, demonstrates massively parallel computation with fault tolerance and distributed memory. However, the synapses implemented so far do not realize any plasticity mechanisms, making online learning impossible [10].
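The per-chip capacity follows directly from the figures just quoted; the arithmetic below (plain Python, for illustration) reproduces the roughly one million neurons and quarter of a billion synapses per chip:

```python
# TrueNorth chip capacity computed from the figures quoted above.
cores = 4_096
neurons_per_core = 256
synapses_per_core = 256 * 256            # programmable crossbar per core

neurons = cores * neurons_per_core       # 1,048,576 neurons per chip
synapses = cores * synapses_per_core     # 268,435,456 synaptic connections
print(f"{neurons:,} neurons, {synapses:,} synapses per chip")
```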
From the systems described above, four different kinds of approaches for developing large-scale neuromorphic systems can be distinguished: custom mixed-signal [9], custom sub-threshold analog [6], custom fully digital [10] and conventional microprocessors [7]. Mixed-signal can refer to analog neural computation with digitally implemented synapses [6], [9]. It can provide, to some extent, physical implementations of neural models and thus a platform for neural emulation. A fully digital approach implements both neurons and synapses as digital circuitry, offering a binary implementation and thus a simulation of the neural model. Design parameters vary from approach to approach. A custom digital system is much more re-configurable than a mixed-signal or an analog one. On the other hand, analog or mixed-signal systems are more power efficient than digital ones. However, as CMOS semiconductor device fabrication advances to better technology nodes, this power gap between the analog and digital domains shrinks. Furthermore, the integration density of components is at roughly the same level for all the above approaches. Despite their differences, all the approaches share common features such as massive parallelism, configurability and asynchronous communication.
[1] "The Blue Brain Project." [Online]. Available: http://bluebrain.epfl.ch/
[2] R. Ananthanarayanan, S. K. Esser, H. D. Simon, and D. S. Modha, "The cat is out of the bag: cortical simulations with 10^9 neurons, 10^13 synapses," in Proceedings of the Conference on High Performance Computing Networking, Storage and Analysis, pp. 1-12, Nov. 2009.
[3] P. García-Risueño and P. E. Ibáñez, "A review of high performance computing foundations for scientists," International Journal of Modern Physics C, vol. 23, no. 7, p. 1230001, July 2012.
[4] S. Furber, "To build a brain," IEEE Spectrum, vol. 49, no. 8, pp. 44-49, Aug. 2012.
[5] G. Indiveri and S.-C. Liu, "Memory and information processing in neuromorphic systems," Proceedings of the IEEE, vol. 103, no. 8, pp. 1379-1397, Aug. 2015.
[6] B. Benjamin, P. Gao, E. McQuinn, S. Choudhary, A. Chandrasekaran, J.-M. Bussat, R. Alvarez-Icaza, J. Arthur, P. Merolla, and K. Boahen, "Neurogrid: A mixed-analog-digital multichip system for large-scale neural simulations," Proceedings of the IEEE, vol. 102, no. 5, pp. 699-716, May 2014.
[7] S. Furber, F. Galluppi, S. Temple, and L. Plana, "The SpiNNaker Project," Proceedings of the IEEE, vol. 102, no. 5, pp. 652-665, May 2014.
[8] L. Plana, S. B. Furber, S. Temple, M. Khan, Y. Shi, J. Wu, S. Yang, et al., "A GALS infrastructure for a massively parallel multiprocessor," IEEE Design & Test of Computers, vol. 24, no. 5, pp. 454-463, 2007.
[9] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner, "A wafer-scale neuromorphic hardware system for large-scale neural modeling," in IEEE International Symposium on Circuits and Systems (ISCAS'10), pp. 1947-1950, 2010.
[10] P. Merolla, J. Arthur, R. Alvarez-Icaza, A. Cassidy, J. Sawada, F. Akopyan, B. Jackson, N. Imam, C. Guo, Y. Nakamura, B. Brezzo, I. Vo, S. Esser, R. Appuswamy, B. Taba, A. Amir, M. Flickner, W. Risk, R. Manohar, and D. Modha, "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, pp. 668-673, Aug. 2014.