Fangxin Liu @ Leon


PhD student
ACA Lab, SEIEE
Shanghai Jiao Tong University

Email: liufangxin@sjtu.edu.cn
Wechat: lfx920701
Office: SEIEE 3-125, SJTU

800 DongChuan Road, SEIEE Building #03-125, Minhang District, Shanghai

Weibo
Github
Google Scholar
Twitter
LinkedIn


My Friends:

Cheng Deng (邓程): Ph.D. student, Wu Wenjun Honorary Ph.D. Class

Zongwu Wang (汪宗武): Ph.D. student, my junior labmate, also supervised by Prof. Jiang

About me

After obtaining my master's degree in computer science and technology from PLA University of Science and Technology in Nanjing, Jiangsu Province in June 2017, I worked at the National Internet Emergency Center (CNCERT/CC) Shanghai Branch from July 2017 to July 2019. I am currently pursuing a Ph.D. in computer science at the Advanced Computer Architecture Laboratory (ACA Lab) of Shanghai Jiao Tong University, supervised by Prof. Li Jiang. My research focuses on efficient AI/ML algorithms, in-memory computing, software-hardware co-design, and neuromorphic computing. I was also selected for the Wu Wenjun Honorary Ph.D. Class.

Papers and related resources will be shared on my GitHub in the near future.


Research

My current research interests focus on:

  • In-memory Computing (存内计算)
  • Brain-inspired Neuromorphic Computing (神经形态计算)
  • Neural Network Acceleration


What's New

I was awarded the National Scholarship again!

Sep. 27, 2022

Our paper "Randomize and Match" has been accepted by ICCD 2022!

Aug. 22, 2022

I was awarded the Spark Award from HUAWEI!

Jul. 29, 2022

I was invited to deliver a talk on SNNs at AI Time.

May 25, 2022

Our paper "SoBS-X" has been accepted by IEEE TCAD!

Apr. 17, 2022

Two papers have been accepted by ICMR 2022!

Apr. 6, 2022

Our paper "L3E-HD" has been accepted by SIGIR 2022!

Apr. 1, 2022

Our paper "IVQ" has been accepted by IEEE TCAD!

Feb. 23, 2022

Three papers have been accepted by DAC 2022!

Feb. 22, 2022

Our paper has been selected by DATE 2022 as Best Paper in the T Track!

Feb. 4, 2022

Our paper "DynSNN" has been accepted by ICASSP 2022!

Jan. 21, 2022

Our paper "SpikeConverter" has been accepted by AAAI 2022!

Dec. 1, 2021

Two papers have been accepted by DATE 2022!

Nov. 11, 2021

Our paper "SSTDP" has been accepted by Frontiers in Neuroscience!

Oct. 1, 2021

I was awarded the National Scholarship!

Sep. 23, 2021

Our paper "HAWIS" has been accepted by ASP-DAC 2022!

Sep. 12, 2021

Our paper "SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network" has been accepted by ICCD'21!

Aug. 20, 2021

Our paper "Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point" has been accepted by ICCV'21!

Jul. 23, 2021

Our paper "Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator" has been accepted by ICCAD'21!

Jul. 13, 2021

Our paper "IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration" has been accepted by GLSVLSI'21!

Apr. 12, 2021

Our paper "PIMGCN: A ReRAM-based PIM Design for Graph Convolutional Network Acceleration" has been accepted by DAC'21!

Feb. 20, 2021

… see more


Publications [Google Scholar]

  • Fangxin Liu, et al. "Randomize and Match: Exploiting Irregular Sparsity for Energy Efficient Processing in SNNs." to appear in 40th IEEE International Conference on Computer Design (ICCD'22). (Acceptance Rate: 24%).
  • Fangxin Liu, et al. "SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator.", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IEEE TCAD), 2022. (CCF Tier A). [Link]
  • Fangxin Liu, et al. "L3E-HD: A Framework Enabling Efficient Ensemble in High-Dimensional Space for Language Tasks.", to appear in 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'22). (Acceptance Rate: 24.7%). [Code] [Link]
  • Fangxin Liu, et al. "IVQ: In-Memory Acceleration of DNN Inference Exploiting Varied Quantization.", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (IEEE TCAD), 2022. (CCF Tier A). [Link]
  • Fangxin Liu, et al. "EBSP: Evolving Bit Sparsity Patterns for Hardware-Friendly Inference of Quantized Deep Neural Networks." to appear in 59th Design Automation Conference (DAC'22). (Acceptance Rate: 23%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "SATO: Spiking Neural Network Acceleration via Temporal-Oriented Dataflow and Architecture." to appear in 59th Design Automation Conference (DAC'22). (Acceptance Rate: 23%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "PIM-DH: ReRAM-based Processing-in-Memory Architecture for Deep Hashing Acceleration." to appear in 59th Design Automation Conference (DAC'22). (Acceptance Rate: 23%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "DynSNN: A Dynamic Approach to Reduce Redundancy in Spiking Neural Networks." to appear in 47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'22). (CCF Tier B). [Link] [Preprint] [Poster]
  • Fangxin Liu, et al. "SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks." to appear in 36th AAAI Conference on Artificial Intelligence (AAAI'22), 2022. (Acceptance Rate: 15%). (Oral, top 5%). [Link] [Poster] [News] [Talk]
  • Fangxin Liu, et al. "SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.", Frontiers in Neuroscience, 15 (2021). (Impact Factor 4.2) [Link] [Code]
  • Fangxin Liu, et al. "SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network." In Proceedings of 39th IEEE International Conference on Computer Design 2021 (ICCD'21). (Acceptance Rate: 24.4%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point." In Proceedings of International Conference on Computer Vision 2021 (ICCV'21). (Acceptance Rate: 25.9%). [Link] [Code] [Poster]
  • Fangxin Liu, et al. "Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator." In Proceedings of 40th IEEE/ACM International Conference on Computer-Aided Design (ICCAD'21). (Acceptance Rate: 23.5%). [Link] [Preprint]
  • Fangxin Liu, et al. "IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration." In Proceedings of 31st Great Lakes Symposium on VLSI 2021 (GLSVLSI'21). (Acceptance Rate: 24%). [Link]
  • Fangxin Liu, et al. "AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks." CoRR abs/2007.03903 (2020). [Link]
  • Yongbiao Chen, Fangxin Liu, et al. "TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval." to appear in ACM International Conference on Multimedia Retrieval 2022 (ICMR'22). (Acceptance Rate: 30%). [Link] [Preprint]
  • Yongbiao Chen, Fangxin Liu, et al. "Supervised Contrastive Vehicle Quantization for Efficient Vehicle Retrieval." to appear in ACM International Conference on Multimedia Retrieval 2022 (ICMR'22). (Acceptance Rate: 30%). [Link]
  • Zongwu Wang, Fangxin Liu, et al. "Self-Terminated Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing." to appear in 25th Design, Automation and Test in Europe Conference (DATE'22). (Best Paper Award). [Link] [Preprint] [Slides]
  • Tao Yang, Fangxin Liu, et al. "DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture." to appear in 25th Design, Automation and Test in Europe Conference (DATE'22). (Nominated for Best Paper Award). [Link] [Preprint] [Slides]
  • Qidong Tang, Fangxin Liu, et al. "HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine." In Proceedings of 27th Asia and South Pacific Design Automation Conference (ASP-DAC'22). (Acceptance Rate: 36.5%). [Link] [Preprint] [Slides]
  • Tao Yang, Fangxin Liu, et al. "PIMGCN: A ReRAM-based PIM Design for Graph Convolutional Network Acceleration." In Proceedings of Design Automation Conference (DAC'21). [Link]

Awards

  • National Scholarship, 2022.
  • National Scholarship, 2021.
  • Wu Wen Jun Honorary Doctoral Scholarship, 2021.
  • Best Paper Award from DATE, 2022.
  • Spark Award from HUAWEI, 2022.

Academic Activities

    Conference Reviewers
    • AAAI Conference on Artificial Intelligence (AAAI), 2023
    • Great Lakes Symposium on VLSI (GLSVLSI), 2022
    • Design Automation Conference (DAC), 2022
    • Great Lakes Symposium on VLSI (GLSVLSI), 2021


Projects

  • [NEURO-MIMETIC LEARNING ALGORITHMS]: I am currently building a neuro-mimetic learning framework aimed at delivering better accuracy and higher inference efficiency.
  • … see more


    Thanks to Vasilios Mavroudis for the template!