Publications

You can also find my articles on my Google Scholar profile.

{=} denotes equal contribution; {*} denotes corresponding author
HPCA 2025
(Top Conf. in Computer Architecture)
Fangxin Liu=, Shiyuan Huang=, Ning Yang, Zongwu Wang, Haomin Li, and Li Jiang
CROSS: Compiler-Driven Optimization of Sparse DNNs Using Sparse/Dense Computation Kernels (Acceptance Rate: 21%)
HPCA 2025
(Top Conf. in Computer Architecture)
Houshu He, Gang Li, Fangxin Liu, Li Jiang, Xiaoyao Liang, and Zhuoran Song
GSArch: Breaking Memory Barriers in 3D Gaussian Splatting Training via Architectural Support (Acceptance Rate: 21%)
IEEE TCAS-AI 2024
(Imp. Jour. in Design Automation)
Ning Yang=, Fangxin Liu=, Zongwu Wang, Junping Zhao, Li Jiang
SearchQ: Search-based Fine-Grained Quantization for Data-Free Model Compression
IEEE TODAES 2024
(Imp. Jour. in Design Automation)
Shiyuan Huang=, Fangxin Liu=, Tian Li, Zongwu Wang, Ning Yang, Haomin Li and Li Jiang
STCO: Enhancing Training Efficiency via Structured Sparse Tensor Compilation Optimization (CCF Tier B)
ASP-DAC 2025
(Top Conf. in Design Automation)
Fangxin Liu=, Zongwu Wang=, Peng Xu, Shiyuan Huang and Li Jiang
Exploiting Differential-Based Data Encoding for Enhanced Query Efficiency (Acceptance Rate: 28%)
ASP-DAC 2025
(Top Conf. in Design Automation)
Haomin Li=, Fangxin Liu=, Zewen Sun, Zongwu Wang, Shiyuan Huang, Ning Yang, and Li Jiang
NeuronQuant: Accurate and Efficient Post-Training Quantization for Spiking Neural Networks (Acceptance Rate: 28%)
IEEE TCAD 2024
(Top Journal in Computer-Aided Design)
Shiyuan Huang, Fangxin Liu*, Tao Yang, Zongwu Wang, Ning Yang, and Li Jiang
SpMMPlu-Pro: An Enhanced Compiler Plug-In for Efficient SpMM and Sparsity Propagation Algorithm (CCF Tier A)
ICCD 2024
(Import. Conf. in Computer Architecture)
Fangxin Liu=, Ning Yang=, Zongwu Wang, Zhiyan Song, Tao Yang, and Li Jiang
TBUS: Taming Bipartite Unstructured Sparsity for Energy-Efficient DNN Acceleration (Acceptance Rate: 25%)
ICCD 2024
(Import. Conf. in Computer Architecture)
Fangxin Liu=, Ning Yang=, Zhiyan Song, Zongwu Wang, and Li Jiang
HOLES: Boosting Large Language Models Efficiency with Hardware-friendly Lossless Encoding (Acceptance Rate: 25%)
ICCD 2024
(Import. Conf. in Computer Architecture)
Zongwu Wang=, Fangxin Liu=, and Li Jiang
PS4: A Low Power SNN Accelerator with Spike Speculative Scheme (Acceptance Rate: 25%)
ICCD 2024
(Import. Conf. in Computer Architecture)
Longyu Zhao, Zongwu Wang, Fangxin Liu*, and Li Jiang
Ninja: A Hardware Assisted System for Accelerating Nested Address Translation (Acceptance Rate: 25%)
MICRO 2024
(Top Conf. in Computer Architecture)
Zongwu Wang, Fangxin Liu*, Ning Yang, Shiyuan Huang, Haomin Li, and Li Jiang
COMPASS: SRAM-Based Computing-in-Memory SNN Accelerator with Adaptive Spike Speculation (Acceptance Rate: 22%)
MICRO 2024
(Top Conf. in Computer Architecture)
Zhuoran Song, Houshu He, Fangxin Liu*, Yifan Hao, Xinkai Song, Li Jiang, and Xiaoyao Liang
SRender: Boosting Neural Radiance Field Efficiency via Sensitivity-Aware Dynamic Precision Rendering (Acceptance Rate: 22%)
IEEE TPDS 2024
(Top Journal in Computer Architecture)
Fangxin Liu, Zongwu Wang, Wenbo Zhao, Ning Yang, Yongbiao Chen, Shiyuan Huang, Haomin Li, Tao Yang, Songwen Pei, Xiaoyao Liang, and Li Jiang
Exploiting Temporal-Unrolled Parallelism for Energy-Efficient SNN Acceleration (CCF Tier A)
ISLPED 2024
(Top Conf. in Low Power Design)
Zongwu Wang, Fangxin Liu*, Longyu Zhao, Shiyuan Huang, and Li Jiang
LowPASS: A Low-Power PIM-based Accelerator with Speculative Scheme for SNNs (Acceptance Rate: 21%)
ISCA 2024
(Top Conf. in Computer Architecture)
Yilong Zhao, Mingyu Gao, Fangxin Liu*, Yiwei Hu, Zongwu Wang, Han Lin, Ji Li, He Xian, Hanlin Dong, Tao Yang, Naifeng Jing, Xiaoyao Liang, Li Jiang
UM-PIM: DRAM-based PIM with Uniform & Shared Memory Space (Acceptance Rate: 18%)
DAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Ning Yang=, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang
INSPIRE: Accelerating Deep Neural Networks via Hardware-friendly Index-Pair Encoding (Acceptance Rate: 23%)
DAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Ning Yang=, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang
EOS: An Energy-Oriented Attack Framework for Spiking Neural Networks (Acceptance Rate: 23%)
DATE 2024
(Top Conf. in Design Automation)
Jiahao Sun, Fangxin Liu=, Yijian Zhang, Li Jiang and Rui Yang
RTSA: An RRAM-TCAM based In-Memory-Search Accelerator for Sub-100 μs Collision Detection (Acceptance Rate: 24%)
ASPLOS 2024
(Top Conf. in Computer Architecture)
Zhuoran Song, Chunyu Qi, Fangxin Liu=, Naifeng Jing, Xiaoyao Liang
CMC: Video Transformer Acceleration via CODEC Assisted Matrix Condensing (Acceptance Rate: 24%)
HPCA 2024
(Top Conf. in Computer Architecture)
Fangxin Liu=, Ning Yang=, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang
SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding (Acceptance Rate: 18%)
ASP-DAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Haomin Li=, Ning Yang, Yichi Chen, Zongwu Wang, Tao Yang, Li Jiang
PAAP-HD: PIM-Assisted Approximation for Efficient Hyper-Dimensional Computing (Acceptance Rate: 29%)
ASP-DAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Haomin Li=, Ning Yang, Zongwu Wang, Tao Yang, Li Jiang
TEAS: Exploiting Spiking Activity for Temporal-wise Adaptive Spiking Neural Networks (Acceptance Rate: 29%)
ASP-DAC 2024
(Top Conf. in Design Automation)
Shiyuan Huang=, Fangxin Liu=, Tian Li, Zongwu Wang, Haomin Li, Li Jiang
TSTC: Enabling Efficient Training via Structured Sparse Tensor Compilation (Acceptance Rate: 29%)
ASP-DAC 2024
(Top Conf. in Design Automation)
Haomin Li=, Fangxin Liu=, Yichi Chen, Li Jiang
HyperFeel: An Efficient Federated Learning Framework Using Hyperdimensional Computing (Acceptance Rate: 29%)
ICCD 2023
(Import. Conf. in Computer Architecture)
Fangxin Liu=, Ning Yang=, Li Jiang
PSQ: An Automatic Search Framework for Data-Free Quantization on PIM-based Architecture (Acceptance Rate: 28%)
ICCAD 2023
(Top Conf. in Design Automation)
Haomin Li=, Fangxin Liu=, Yichi Chen, Li Jiang
HyperNode: An Efficient Node Classification Framework Using HyperDimensional Computing (Acceptance Rate: 23%)
IEEE TC 2023
(Top Journal in Computer Architecture)
Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Xiaoyao Liang, Li Jiang
ERA-BS: Boosting the Efficiency of ReRAM-based PIM Accelerator with Fine-Grained Bit-Level Sparsity (CCF Tier A)
DAC 2023
(Top Conf. in Design Automation)
Fangxin Liu=, Haomin Li=, Zongwu Wang, Yongbiao Chen, Li Jiang
HyperAttack: An Efficient Attack Framework for HyperDimensional Computing (Acceptance Rate: 23%)
ICCD 2022
(Import. Conf. in Computer Architecture)
Fangxin Liu, Zongwu Wang, Yongbiao Chen, Li Jiang
Randomize and Match: Exploiting Irregular Sparsity for Energy Efficient Processing in SNNs (Acceptance Rate: 24%)
IEEE TCAD 2022
(Top Journal in Computer-Aided Design)
Fangxin Liu, Zongwu Wang, Yongbiao Chen, Zhezhi He, Tao Yang, Xiaoyao Liang, Li Jiang
SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator (CCF Tier A)
SIGIR 2022
(Top Conf. in Information Retrieval)
Fangxin Liu, Haomin Li, Xiaokang Yang, Li Jiang
L3E-HD: A Framework Enabling Efficient Ensemble in High-Dimensional Space for Language Tasks (Acceptance Rate: 24%)
IEEE TCAD 2022
(Top Journal in Computer-Aided Design)
Fangxin Liu=, Wenbo Zhao, Zongwu Wang, Yilong Zhao, Tao Yang, Yiran Chen, Li Jiang
IVQ: In-Memory Acceleration of DNN Inference Exploiting Varied Quantization (CCF Tier A)
DAC 2022
(Top Conf. in Design Automation)
Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang
EBSP: Evolving Bit Sparsity Patterns for Hardware Friendly Inference of Quantized Deep Neural Networks (Acceptance Rate: 24.7%)
DAC 2022
(Top Conf. in Design Automation)
Fangxin Liu=, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Zhezhi He, Rui Yang, Qidong Tang, Tao Yang, Cheng Zhuo
PIM-DH: ReRAM based Processing in Memory Architecture for Deep Hashing Acceleration (Acceptance Rate: 24.7%)
DAC 2022
(Top Conf. in Design Automation)
Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Tao Yang, Zhezhi He, Xiaokang Yang, Li Jiang
SATO: Spiking Neural Network Acceleration via Temporal Oriented Dataflow and Architecture (Acceptance Rate: 24.7%)
ICASSP 2022
(Top Conf. in Signal Processing)
Fangxin Liu=, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Fei Dai
DynSNN: A Dynamic Approach to Reduce Redundancy in Spiking Neural Networks (CCF Tier B)
AAAI 2022 (Oral)
(Top Conf. in Artificial Intelligence)
Fangxin Liu, Wenbo Zhao*, Yongbiao Chen, Zongwu Wang, Li Jiang
SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks (Acceptance Rate: 15%)
Frontiers in Neuroscience, 2021
(SCI Tier 2)
Fangxin Liu=, Wenbo Zhao=, Yongbiao Chen, Zongwu Wang, Tao Yang, Li Jiang
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training (Impact Factor: 4.7)
ICCD 2021
(Import. Conf. in Computer Architecture)
Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Tao Yang, Jingnai Feng, Xiaoyao Liang, Li Jiang
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network (Acceptance Rate: 24.4%)
ICCV 2021
(Top Conf. in Computer Vision)
Fangxin Liu=, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, Li Jiang
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point (Acceptance Rate: 25.9%)
ICCAD 2021
(Top Conf. in Design Automation)
Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Yongbiao Chen, Li Jiang
Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator (Acceptance Rate: 23.5%)
GLSVLSI 2021
Fangxin Liu=, Wenbo Zhao, Zongwu Wang, Tao Yang, Li Jiang
IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration (Acceptance Rate: 24%)
ICMR 2022
Yongbiao Chen, Fangxin Liu, et al.
TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval
ICMR 2022
Yongbiao Chen, Fangxin Liu, et al.
Supervised Contrastive Vehicle Quantization for Efficient Vehicle Retrieval
DATE 2022
(Top Conf. in Design Automation)
Zongwu Wang, Fangxin Liu, et al.
Self-Terminated Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing (Best Paper Award)
DATE 2022
(Top Conf. in Design Automation)
Tao Yang, Fangxin Liu, et al.
DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture (Nominated for Best Paper)
ASP-DAC 2022
(Top Conf. in Design Automation)
Qidong Tang, Fangxin Liu, et al.
HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine
DAC 2022
(Top Conf. in Design Automation)
Tao Yang, Fangxin Liu, et al.
PIMGCN: A ReRAM-based PIM Design for Graph Convolutional Network Acceleration