Speaker: Prof. Boris Murmann
Affiliation: Stanford University, USA
Title: Mixed-Signal Computing for Deep Neural Network Inference
Abstract: Modern deep neural networks (DNNs) require billions of multiply-accumulate operations per inference. Because these computations tolerate relatively low precision, analog arithmetic becomes feasible and can be more efficient than digital in the low-SNR regime. However, the scale of DNNs favors circuits that leverage dense digital memory, leading to mixed-signal processing schemes for scalable solutions. This presentation will investigate the potential of mixed-signal approaches in the context of modern DNN processor architectures, which are typically limited by data movement and memory access. We will show that dense mixed-signal fabrics offer new degrees of freedom that can help alleviate these bottlenecks. In addition, we will derive asymptotic efficiency limits and highlight the challenges associated with data conversion interfaces (D/A and A/D) as well as programmability. Finally, these findings are extended to in-memory computing approaches (SRAM- and RRAM-based) that are bound by similar constraints.
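To make the "billions of multiply-accumulate operations per inference" claim concrete, here is a back-of-the-envelope sketch of how MAC counts are typically tallied for a convolutional layer. The layer shapes below are illustrative assumptions, not taken from the talk:

```python
def conv_macs(in_ch: int, out_ch: int, k: int, out_h: int, out_w: int) -> int:
    """MAC count for one 2-D convolution layer:
    each output activation needs k*k*in_ch multiply-accumulates."""
    return out_h * out_w * out_ch * k * k * in_ch

# Hypothetical mid-network layer: 3x3 conv, 256 -> 256 channels, 14x14 output.
macs = conv_macs(in_ch=256, out_ch=256, k=3, out_h=14, out_w=14)
print(f"{macs / 1e6:.1f} M MACs for this single layer")
```

A single layer of this size already costs roughly 10^8 MACs; summing over the tens of layers in a typical DNN readily reaches the billions per inference cited above, which is what motivates low-precision and mixed-signal arithmetic.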
Date: Mon 17 Aug 2020
Start Time: 17:00
End Time: 18:00
Location: Zoom Meeting | Electrical Eng. Building