Flash-Based Computing-in-Memory Architecture to Implement High-Precision Sparse Coding
Yueran Qi, Yang Feng, Hai Wang, Chengcheng Wang, Maoying Bai, Jing Liu, Xuepeng Zhan, Jixuan Wu, Qianwen Wang, Jiezhi Chen
To address concerns about power consumption and processing efficiency in large-scale data processing, sparse coding implemented in computing-in-memory (CIM) architectures is attracting increasing attention. Here, a novel Flash-based CIM architecture is proposed to implement large-scale sparse coding, on which various matrix weight training algorithms are verified. Then, with further optimization of the mapping method and initialization conditions, a variation-sensitive training (VST) algorithm is designed to enhance processing efficiency and accuracy for image reconstruction applications. Based on comprehensive characterizations that account for the impact of array variations, experiments demonstrate that the trained dictionary can successfully reconstruct images on a 55 nm flash memory array built on the proposed architecture, irrespective of current variations. These results indicate the feasibility of using Flash-based CIM architectures to implement high-precision sparse coding in a wide range of applications.
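The abstract refers to dictionary-based sparse coding for image reconstruction. As a rough illustration of the underlying optimization only (not the paper's VST algorithm, mapping method, or Flash-array implementation), the following minimal sketch reconstructs a synthetic patch from a sparse code over a fixed random dictionary using an ISTA solver; the dictionary size, sparsity level, and regularization weight are illustrative assumptions.

```python
# Minimal sparse-coding sketch: solve min_a 0.5*||x - D a||^2 + lam*||a||_1
# with ISTA, then reconstruct x from the sparse code. All parameters here
# (dictionary shape, lam, iteration count) are illustrative assumptions,
# not values from the paper.
import numpy as np

def ista(D, x, lam=0.05, n_iter=200):
    """Iterative shrinkage-thresholding for the LASSO sparse-coding problem."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant (sigma_max^2)
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                  # gradient of the quadratic term
        a = a - grad / L                          # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))                # overcomplete dictionary for 8x8 patches
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x = D @ a_true                                    # synthetic "image patch"

a_hat = ista(D, x)
x_rec = D @ a_hat                                 # reconstruction from the sparse code
print("relative reconstruction error:",
      np.linalg.norm(x - x_rec) / np.linalg.norm(x))
```

In a CIM setting, the matrix-vector products D @ a and D.T @ r would be the operations mapped onto the memory array, which is why device-level current variations directly affect reconstruction quality.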