Fangxin Liu @ Leon


Assistant Professor
ACA Lab, SEIEE
Shanghai Jiao Tong University

Email: liufangxin@sjtu.edu.cn
WeChat: lfx920701
Office: SEIEE 3-125, SJTU

800 DongChuan Road, SEIEE Building #03-125, Minhang District, Shanghai

Weibo
Github
Google Scholar
Twitter
LinkedIn



My Friends:

Zongwu Wang (汪宗武): PhD student and my junior labmate, also advised by Prof. Jiang

Haomin Li (李皓民): PhD student and my junior labmate, also advised by Prof. Jiang

About me

Fangxin Liu is an Assistant Professor at Shanghai Jiao Tong University (SJTU), specializing in neural network acceleration (e.g., neural network compression and SW/HW co-design), in-memory computing, and brain-inspired neuromorphic computing.

He obtained his Ph.D. degree in Computer Science and Technology from Shanghai Jiao Tong University in 2023, under the supervision of Prof. Li Jiang. You can find more information about Prof. Jiang here.

Papers and related resources will be shared on my GitHub in the near future.


Research

My current research interests focus on:

  • In-memory Computing (存内计算)
  • Brain-inspired Neuromorphic Computing (神经模态计算)
  • Neural Network Acceleration


What's New

Our paper "HyperAttack" has been accepted by DAC 2023!

Feb./24/2023

Our paper "SIMSnn" has been accepted by DATE 2023!

Nov./18/2022

I was awarded the National Scholarship again!

Sep./27/2022

Our paper "Randomize and Match" has been accepted by ICCD 2022!

Aug./22/2022

I was awarded the Spark Award from HUAWEI!

Jul./29/2022

I was invited to deliver a talk about SNNs at AI Time!

May/25/2022

Our paper "SoBS-X" has been accepted by IEEE TCAD!

Apr./17/2022

Two papers have been accepted by ICMR 2022!

Apr./06/2022

Our paper "L3E-HD" has been accepted by SIGIR 2022!

Apr./01/2022

Our paper "IVQ" has been accepted by IEEE TCAD!

Feb./23/2022

Three papers have been accepted by DAC 2022!

Feb./22/2022

Our paper has been selected by DATE 2022 as Best Paper in the T Track!

Feb./04/2022

Our paper "DynSNN" has been accepted by ICASSP 2022!

Jan./21/2022

Our paper "SpikeConverter" has been accepted by AAAI 2022!

Dec./01/2021

Two papers have been accepted by DATE 2022!

Nov./11/2021

Our paper "SSTDP" has been accepted by Frontiers in Neuroscience!

Oct./01/2021

I was awarded the National Scholarship!

Sep./23/2021

Our paper "HAWIS" has been accepted by ASP-DAC 2022!

Sep./12/2021

Our paper "SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network" has been accepted by ICCD'21 !

Aug./20/2021

Our paper "Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point" has been accepted by ICCV'21 !

Jul./23/2021

Our paper "Bit-Transformer: Transforming Bit-level Sparsity into Higher Preformance in ReRAM-based Accelerator" has been accepted by ICCAD'21 !

Jul./13/2021

Our paper "IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration" has been accepted by GLSVLSI'21 !

Apr./12/2021

Our paper "PIMGCN: A ReRAM-based PIM Design for Graph Convolutional Network Acceleration" has been accepted by DAC'21 !

Feb./20/2021


Publications [Google Scholar]

  • Fangxin Liu, et al. "Randomize and Match: Exploiting Irregular Sparsity for Energy Efficient Processing in SNNs ." to appear in 40th IEEE International Conference on Computer Design (ICCD'22). (Acceptance Rate: 24%).
  • Fangxin Liu, et al. "SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator.", IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems (IEEE TCAD), 2022. (CCF Tier A). [Link]
  • Fangxin Liu, et al. "L3E-HD: A Framework Enabling Efficient Ensemble in High-Dimensional Space for Language Tasks.", to appear in 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'22 ). (Acceptance Rate: 24.7%). [Code] [Link]
  • Fangxin Liu, et al. "IVQ: In-Memory Acceleration of DNN Inference Exploiting Varied Quantization.", IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems (IEEE TCAD), 2022. (CCF Tier A). [Link]
  • Fangxin Liu, et al. "EBSP: Evolving Bit Sparsity Patterns for Hardware-Friendly Inference of Quantized Deep Neural Networks." to appear in 59th Design Automation Conference (DAC'22). (Acceptance Rate: 23%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "SATO: Spiking Neural Network Acceleration via Temporal-Oriented Dataflow and Architecture." to appear in 59th Design Automation Conference (DAC'22). (Acceptance Rate: 23%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "PIM-DH: ReRAM-based Processing-in-Memory Architecture for Deep Hashing Acceleration." to appear in 59th Design Automation Conference (DAC'22). (Acceptance Rate: 23%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "DynSNN: A Dynamic Approach to Reduce Redundancy in Spiking Neural Networks." to appear in 47th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'22). (CCF Tier B). [Link] [Preprint] [Poster]
  • Fangxin Liu, et al. "SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks." to appear in 36th AAAI Conference on Artificial Intelligence (AAAI'22), 2022. (Acceptance Rate: 15%). (Oral, top 5%). [Link] [Poster] [News] [Talk]
  • Fangxin Liu, et al. "SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training.", Frontiers in Neuroscience , 15 (2021). (Impact Factor 4.2) [Link] [Code]
  • Fangxin Liu, et al. "SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network." In Proceedings of 39th IEEE International Conference on Computer Design 2021 (ICCD'21). (Acceptance Rate: 24.4%). [Link] [Preprint] [Slides]
  • Fangxin Liu, et al. "Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point." In Proceedings of International Conference on Computer Vision 2021 (ICCV'21). (Acceptance Rate: 25.9%). [Link] [Code] [Poster]
  • Fangxin Liu, et al. "Bit-Transformer: Transforming Bit-level Sparsity into Higher Preformance in ReRAM-based Accelerator." In Proceedings of 40th IEEE/ACM International Conference on Computer-Aided Design (ICCAD'21). (Acceptance Rate: 23.5%). [Link] [Preprint]
  • Fangxin Liu, et al. "IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration." In Proceedings of 31st Great Lakes Symposium on VLSI 2021 (GLSVLSI'21). (Acceptance Rate: 24%). [Link]
  • Fangxin Liu, et al. "AUSN: Approximately Uniform Quantization by Adaptively Superimposing Non-uniform Distribution for Deep Neural Networks." CoRR abs/2007.03903 (2020). [Link]
  • Yongbiao Chen, Fangxin Liu, et al. "TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval." to appear in ACM International Conference on Multimedia Retrieval 2022 (ICMR'22). (Acceptance Rate: 30%). [Link] [Preprint]
  • Yongbiao Chen, Fangxin Liu, et al. "Supervised Contrastive Vehicle Quantization for Efficient Vehicle Retrieval." to appear in ACM International Conference on Multimedia Retrieval 2022 (ICMR'22). (Acceptance Rate: 30%). [Link]
  • Zongwu Wang, Fangxin Liu, et al. "Self-Terminated Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing." to appear in 25th Design, Automation and Test in Europe Conference (DATE'22). (Best Paper Award). [Link] [Preprint] [Slides]
  • Tao Yang, Fangxin Liu, et al. "DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture." to appear in 25th Design, Automation and Test in Europe Conference (DATE'22). (nominated for best paper award). [Link] [Preprint] [Slides]
  • Qidong Tang, Fangxin Liu, et al. "HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine." In Proceedings of the 27th Asia and South Pacific Design Automation Conference (ASP-DAC'22). (Acceptance Rate: 36.5%). [Link] [Preprint] [Slides]
  • Tao Yang, Fangxin Liu, et al. "PIMGCN: A ReRAM-based PIM Design for Graph Convolutional Network Acceleration." In Proceeding of Design Automation Conference  ( DAC'21). [Link]

Awards

  • National Scholarship, 2022.
  • National Scholarship, 2021.
  • Wu Wen Jun Honorary Doctoral Scholarship, 2021.
  • Best Paper Award from DATE, 2022.
  • Spark Award from HUAWEI, 2022.

Academic Activities

    Conference Reviewers
    • AAAI Conference on Artificial Intelligence (AAAI), 2023
    • Great Lakes Symposium on VLSI (GLSVLSI), 2022
    • Design Automation Conference (DAC), 2022
    • Great Lakes Symposium on VLSI (GLSVLSI), 2021


Selected Projects


Compression and Acceleration for Neuromorphic Computing


Brain-inspired neuromorphic computing aims to understand the cognitive mechanisms of the brain and apply them to advance various areas of computer science. Spiking neural networks (SNNs) have recently attracted extensive attention thanks to their low power consumption and biological plausibility. However, existing work supports SNNs inefficiently: frameworks are either purely software-based, or hardware-based but tied to time-driven execution (contrasted with event-driven updates in the sketch below). I am currently designing a SW/HW co-design framework dedicated to delivering both better accuracy and higher inference efficiency.
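
For context, the short sketch below contrasts the two execution styles mentioned above: a time-driven leaky integrate-and-fire (LIF) neuron that updates its state at every time step, versus an event-driven one that only computes when an input spike arrives. It is a minimal, purely illustrative example; the constants and function names are assumptions for exposition, not part of the framework itself.

    # Minimal sketch (illustrative constants) of time-driven vs. event-driven
    # simulation of a leaky integrate-and-fire (LIF) neuron.
    import math

    TAU, V_TH, DT = 20.0, 1.0, 1.0   # membrane time constant, firing threshold, time step

    def time_driven(input_current):
        """Update the neuron at every time step, even when the input is silent."""
        v, spikes = 0.0, []
        for t, i_t in enumerate(input_current):
            v += (-v + i_t) * DT / TAU        # leaky integration at each step
            if v >= V_TH:
                spikes.append(t)
                v = 0.0                       # reset after firing
        return spikes

    def event_driven(spike_times, weight):
        """Update the neuron only when an input spike (event) arrives."""
        v, last_t, spikes = 0.0, 0.0, []
        for t in spike_times:
            v *= math.exp(-(t - last_t) / TAU)  # closed-form leak between events
            v += weight                         # integrate the incoming spike
            last_t = t
            if v >= V_TH:
                spikes.append(t)
                v = 0.0
        return spikes

The event-driven form skips the silent steps entirely, which illustrates why exploiting spike sparsity is attractive, but also why it produces the irregular workloads that make efficient hardware support challenging.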


Compression and Acceleration for Large-Scale Models


Model sizes keep growing and have become too large to fit on a single accelerator: for instance, the 175 billion parameters of GPT-3 require roughly 350 GB of memory when stored in 16-bit precision. In addition, the memory needed for activations, gradients, and optimizer states during training is at least three times the model's own footprint (see the snippet below). Moreover, when a large-scale model (e.g., a foundation model or LLM) is deployed in practice, it is fine-tuned on task-specific data to generalize to each downstream task. I am currently designing a SW/HW co-design framework dedicated to optimizing storage and execution efficiency.
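
As a quick sanity check on those figures, the arithmetic can be written out directly; the parameter count and the 3x training factor come from the paragraph above, while the rest is a rough illustration rather than a precise memory model.

    # Back-of-the-envelope memory estimate for a GPT-3-scale model (illustrative only).
    params = 175e9                  # 175 billion parameters
    bytes_per_param = 2             # 16-bit (FP16/BF16) storage
    weight_mem_gb = params * bytes_per_param / 1e9
    print(f"weights alone: ~{weight_mem_gb:.0f} GB")                  # ~350 GB

    # Training additionally holds gradients, optimizer states, and activations,
    # commonly estimated at >= 3x the weight memory.
    print(f"rough training footprint: >= {3 * weight_mem_gb:.0f} GB")  # ~1050 GB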


Compression and Acceleration for Databases


There is a pressing demand for storing the extremely large-scale, high-dimensional data that industry and academia generate at an ever-increasing rate. Data compression techniques can lower costs and reduce storage maintenance effort. I am currently working on compressing high-dimensional data together with an efficient storage and query engine.


    Thanks to Vasilios Mavroudis for the template!