Welcome to Fangxin Liu’s Homepage~

Fangxin (Leon) Liu is an Assistant Professor at Shanghai Jiao Tong University (SJTU), specializing in neural network acceleration (e.g., neural network compression and SW/HW co-design), in-memory computing, and brain-inspired neuromorphic computing.

He obtained his Ph.D. degree in Computer Science and Technology from Shanghai Jiao Tong University in 2023, under the supervision of Prof. Li Jiang.

Papers and related resources will be shared on his GitHub in the near future.

News

Nov./13/2024 Our five papers on brain-inspired HDC and LLM acceleration have been accepted by DATE 2025! Congratulations to Haomin, Zongwu, and Ning Yang.

Nov./03/2024 Our two papers on acceleration with Sparse Compilation Optimization and Gaussian Splatting have been accepted by HPCA 2025! Congratulations to Shiyuan.

Nov./01/2024 Our paper “SearchQ” on post-training, data-free compression of LLMs has been accepted by TCAS-AI 2024! Congratulations to Ning Yang.

Oct./03/2024 Our paper “STCO” on AI Compilation Optimization has been accepted by TODAES 2024! Congratulations to Shiyuan.

Sep./08/2024 Our two papers on PQ Compression and SNN Quantization have been accepted by ASP-DAC 2025! Congratulations to Zongwu and Haomin.

Aug./26/2024 I received a grant from the NSFC Youth Fund titled “Neural Network Hardware-Software Co-design Architecture Based on Adaptive Compression Encoding”.

Aug./22/2024 Our paper on a novel compiler plug-in for efficient SpMM has been accepted by IEEE TCAD 2024! Congratulations to Shiyuan.

Aug./2/2024 Our four papers on the acceleration of LLMs, SNNs, and nested address translation in virtualized environments have been accepted by ICCD 2024! Congratulations to Ning Yang, Zongwu, and Longyu.

Jul./18/2024 Our two papers on accelerators for Spiking Neural Networks and Neural Radiance Fields have been accepted by MICRO 2024! Congratulations to Zongwu.

Jul./06/2024 Fangxin Liu received the 2023 ACM Shanghai Doctoral Dissertation Award.

Jun./07/2024 Our paper on an SNN accelerator has been accepted by TPDS 2024!

May/25/2024 Our three papers on DRAM PIM, video acceleration, and LLM acceleration have been accepted by ChinaSys24. We look forward to sharing them in Hangzhou!

May/20/2024 Our paper “LowPASS”, a ReRAM-based SNN accelerator, has been accepted by ISLPED 2024! Congratulations to Zongwu.

Mar./20/2024 Our paper “UM-PIM”, a data re-layout mechanism, has been accepted by ISCA 2024! Congratulations to Yilong.

Mar./18/2024 Fangxin Liu received the 2023 Shanghai CCF Outstanding Dissertation Award.

Feb./27/2024 Our two papers have been accepted by DAC 2024! Congratulations to Ning Yang.

Dec./29/2023 The “SPARK” acceleration framework has been covered by Jiqizhixin (机器之心).

Nov./12/2023 Our paper “RTSA” has been accepted by DATE 2024! Congratulations to Jiahao.

Oct./23/2023 Our paper “SPARK” has been accepted by HPCA 2024! Congratulations to Ning Yang.

Sep./09/2023 Our four papers have been accepted by ASP-DAC 2024! Congratulations to Haomin, Shiyuan, and Tian Li.

Aug./25/2023 Our paper “PSQ” has been accepted by ICCD 2023! Congratulations to Ning Yang.

Jul./21/2023 Our paper “HyperNode” has been accepted by ICCAD 2023! Congratulations to Haomin.

Jul./18/2023 Our paper “ERA-BS” has been accepted by TC 2023!

Feb./24/2023 Our paper “HyperAttack” has been accepted by DAC 2023!

Nov./18/2022 Our paper “SIMSnn” has been accepted by DATE 2023!

Research

His current research interests focus on:

  • LLMs/Neural Network Acceleration (大模型/神经网络训练、推理加速)
  • In-memory Computing (存内计算)
  • Brain-inspired Neuromorphic Computing (神经形态计算)
  • AI Compilation Optimization (AI编译优化)

Recruitment

Our team is seeking self-motivated PhD, Master’s, and undergraduate students interested in Computer Architecture, Efficient AI Acceleration, and PIM Design. If you are interested, please email me your CV.
