ACA Lab, SEIEE
Shanghai Jiao Tong University
Office: SEIEE 3-125, SJTU
After obtaining my master's degree in computer science and technology from PLA Science and Technology University in Nanjing, Jiangsu Province, in June 2017, I worked at the National Internet Emergency Center (CNCERT/CC) Shanghai Branch from July 2017 to July 2019. Currently, I am pursuing my Ph.D. degree in computer science at the Advanced Computer Architecture Laboratory (ACA Lab) of Shanghai Jiao Tong University, supervised by Prof. Li Jiang. My research focuses on efficient AI/ML algorithms, in-memory computing, software-hardware co-design, and neuromorphic computing. I have also been selected for the Wenjun Wu Honored Ph.D. Class.
Papers and related resources will be shared on my GitHub in the near future.
My current research interests focus on:
I was awarded the National Scholarship again!
Our paper "Randomize and Match" has been accepted by ICCD 2022!
I was awarded the Spark Award from HUAWEI!
Fangxin Liu was invited to deliver a talk on SNNs at AI Time.
Our paper "SoBS-X" has been accepted by IEEE TCAD!
Two papers have been accepted by ICMR 2022!
Our paper "L3E-HD" has been accepted by SIGIR 2022!
Our paper "IVQ" has been accepted by IEEE TCAD!
Three papers have been accepted by DAC 2022!
Our paper has been selected as a Best Paper in the T Track at DATE 2022!
Our paper "DynSNN" has been accepted by ICASSP 2022!
Our paper "SpikeConverter" has been accepted by AAAI 2022!
Two papers have been accepted by DATE 2022!
Our paper "SSTDP" has been accepted by Frontiers in Neuroscience!
I was awarded the National Scholarship!
Our paper "HAWIS" has been accepted by ASP-DAC 2022!
Our paper "SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network" has been accepted by ICCD'21!
Our paper "Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point" has been accepted by ICCV'21!
Our paper "Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator" has been accepted by ICCAD'21!
Our paper "IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration" has been accepted by GLSVLSI'21!
Our paper "PIMGCN: A ReRAM-based PIM Design for Graph Convolutional Network Acceleration" has been accepted by DAC'21!