Keynote Speaker

Prof. Santosh Kumar Vishvakarma

Department of Electrical Engineering, IIT Indore

https://sites.google.com/site/svishvakarma/

Prof. Santosh Kumar Vishvakarma received the M.Tech. degree in microelectronics from Panjab University, Chandigarh, India, in 2003, and the Ph.D. degree from IIT Roorkee, Roorkee, India, in 2010. From 2009 to 2010, he was with the University Graduate Center, Kjeller, Norway, as a Post-Doctoral Fellow with Prof. T. A. Fjeldly under the European Union project COMON, working on compact model development and parameter extraction for multi-gate MOSFETs. He is currently with the School of Engineering, IIT Indore, Indore, as an Assistant Professor, where he leads the Nanoscale Devices, VLSI Circuit and System Design Laboratory. His current research interests include nanoscale devices and circuits; ultra-low-power digital and analog circuit design and the associated technology; FPGA-based design and power-reduction techniques in FPGA-based system design; multi-gate and multi-fin MOSFETs; and tunnel FETs and their circuit applications in memories. He has recently been working on high-speed transceiver design and graphene-based digital standard-cell design, and is also interested in the Internet of Things for healthcare and defence applications.

Title of the Talk: In-Memory Computation for Edge AI

Abstract of the Talk: Compute-in-memory (CIM) is a new computing paradigm that addresses the von Neumann bottleneck in hardware accelerator design for deep learning. The input-vector and weight-matrix multiplication, i.e., the multiply-and-accumulate (MAC) operation, can be performed in the analog domain within the memory sub-array, leading to significant improvements in throughput and energy efficiency. Static random-access memory (SRAM) and emerging non-volatile memories such as resistive random-access memory (RRAM) are promising candidates for storing the weights of deep neural network (DNN) models. We will review recent progress in SRAM- and RRAM-based CIM macros that have been demonstrated in silicon and in FPGA implementations, and then discuss the general design challenges of CIM chips. CIM architectures exhibit an intrinsic trade-off between computational accuracy and energy efficiency. Ultra-low-precision networks such as binary neural networks (BNNs) have recently gained momentum, since the reduced precision lowers the costs of storage, computation, and communication, enabling inference at the edge. RRAM crossbar-based BNN accelerators have shown tremendous potential in boosting the speed and energy efficiency of compute-intensive deep learning applications at the edge.
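To make the abstract's central idea concrete, the Python sketch below models an idealized binary-weight crossbar MAC: weights sit in the array as conductances, inputs arrive as word-line voltages, each bit line sums its column's currents (Ohm's law plus Kirchhoff's current law), and a coarse ADC digitizes the result. Everything here — function names, the ADC resolution, the array sizes — is an illustrative assumption for this page, not material from the talk itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Map real values to {-1, +1}, as in a binary neural network (BNN)."""
    return np.where(x >= 0, 1, -1)

def crossbar_mac(inputs, weights, adc_bits=4):
    """Idealized analog CIM MAC followed by a coarse column ADC.

    inputs  : (n,)   binarized activations in {-1, +1}
    weights : (n, m) binarized weights in {-1, +1}, stored as conductances
    adc_bits: assumed resolution of the per-column ADC
    """
    # Analog accumulation along each bit line: I_j = sum_i V_i * G_ij.
    analog = inputs @ weights

    # The ADC resolves only 2**adc_bits levels over the full-scale
    # range [-n, +n]; this quantization is where CIM trades accuracy
    # for energy efficiency.
    n = inputs.shape[0]
    levels = 2 ** adc_bits
    step = 2 * n / (levels - 1)
    return np.round(analog / step) * step

# Toy example: one 64-input binary layer with 8 output columns.
x = binarize(rng.standard_normal(64))
W = binarize(rng.standard_normal((64, 8)))

exact = x @ W
quantized = crossbar_mac(x, W, adc_bits=4)
print("exact    :", exact)
print("quantized:", quantized.astype(int))
```

Lowering `adc_bits` in this toy model makes the read-out cheaper but coarsens the partial sums, which is one minimal sketch of the accuracy-versus-energy trade-off the abstract refers to. In digital BNN hardware, the same ±1 dot product is typically realized as an XNOR of the operand bits followed by a popcount, avoiding multipliers altogether.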