Publications

You can also find my articles on my Google Scholar profile.

= denotes equal contribution; * denotes corresponding author
ISCA 2024
(Top Conf. in Computer Architecture)
Yilong Zhao, Mingyu Gao, Fangxin Liu*, Yiwei Hu, Zongwu Wang, Han Lin, Ji Li, He Xian, Hanlin Dong, Tao Yang, Naifeng Jing, Xiaoyao Liang, Li Jiang
UM-PIM: DRAM-based PIM with Uniform & Shared Memory Space (Acceptance Rate: 18%)
DAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Ning Yang=, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang
INSPIRE: Accelerating Deep Neural Networks via Hardware-friendly Index-Pair Encoding (Acceptance Rate: 23%)
DAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Ning Yang=, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang
EOS: An Energy-Oriented Attack Framework for Spiking Neural Networks (Acceptance Rate: 23%)
DATE 2024
(Top Conf. in Design Automation)
Jiahao Sun, Fangxin Liu=, Yijian Zhang, Li Jiang, Rui Yang
RTSA: An RRAM-TCAM based In-Memory-Search Accelerator for Sub-100 μs Collision Detection (Acceptance Rate: 24%)
ASPLOS 2024
(Top Conf. in Computer Architecture)
Zhuoran Song, Chunyu Qi, Fangxin Liu=, Naifeng Jing, Xiaoyao Liang
CMC: Video Transformer Acceleration via CODEC Assisted Matrix Condensing (Acceptance Rate: 24%)
HPCA 2024
(Top Conf. in Computer Architecture)
Fangxin Liu=, Ning Yang=, Haomin Li, Zongwu Wang, Zhuoran Song, Songwen Pei, Li Jiang
SPARK: Scalable and Precision-Aware Acceleration of Neural Networks via Efficient Encoding (Acceptance Rate: 18%)
ASPDAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Haomin Li=, Ning Yang, Yichi Chen, Zongwu Wang, Tao Yang, Li Jiang
PAAP-HD: PIM-Assisted Approximation for Efficient Hyper-Dimensional Computing (Acceptance Rate: 29%)
ASPDAC 2024
(Top Conf. in Design Automation)
Fangxin Liu=, Haomin Li=, Ning Yang, Zongwu Wang, Tao Yang, Li Jiang
TEAS: Exploiting Spiking Activity for Temporal-wise Adaptive Spiking Neural Networks (Acceptance Rate: 29%)
ASPDAC 2024
(Top Conf. in Design Automation)
Shiyuan Huang=, Fangxin Liu=, Tian Li, Zongwu Wang, Haomin Li, Li Jiang
TSTC: Enabling Efficient Training via Structured Sparse Tensor Compilation (Acceptance Rate: 29%)
ASPDAC 2024
(Top Conf. in Design Automation)
Haomin Li=, Fangxin Liu=, Yichi Chen, Li Jiang
HyperFeel: An Efficient Federated Learning Framework Using Hyperdimensional Computing (Acceptance Rate: 29%)
ICCD 2023
Fangxin Liu=, Ning Yang=, Li Jiang
PSQ: An Automatic Search Framework for Data-Free Quantization on PIM-based Architecture (Acceptance Rate: 28%)
ICCAD 2023
(Top Conf. in Design Automation)
Haomin Li=, Fangxin Liu=, Yichi Chen, Li Jiang
HyperNode: An Efficient Node Classification Framework Using HyperDimensional Computing (Acceptance Rate: 23%)
IEEE TC 2023
(Top Journal in Computer Architecture)
Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Xiaoyao Liang, Li Jiang
ERA-BS: Boosting the Efficiency of ReRAM-based PIM Accelerator with Fine-Grained Bit-Level Sparsity (CCF Tier A)
DAC 2023
(Top Conf. in Design Automation)
Fangxin Liu=, Haomin Li=, Zongwu Wang, Yongbiao Chen, Li Jiang
HyperAttack: An Efficient Attack Framework for HyperDimensional Computing (Acceptance Rate: 23%)
ICCD 2022
Fangxin Liu, Zongwu Wang, Yongbiao Chen, Li Jiang
Randomize and Match: Exploiting Irregular Sparsity for Energy Efficient Processing in SNNs (Acceptance Rate: 24%)
IEEE TCAD 2022
(Top Journal in Computer-Aided Design)
Fangxin Liu, Zongwu Wang, Yongbiao Chen, Zhezhi He, Tao Yang, Xiaoyao Liang, Li Jiang
SoBS-X: Squeeze-Out Bit Sparsity for ReRAM-Crossbar-Based Neural Network Accelerator (CCF Tier A)
SIGIR 2022
(Top Conf. in Information Retrieval)
Fangxin Liu, Haomin Li, Xiaokang Yang, Li Jiang
L3E-HD: A Framework Enabling Efficient Ensemble in High-Dimensional Space for Language Tasks (Acceptance Rate: 24%)
IEEE TCAD 2022
(Top Journal in Computer-Aided Design)
Fangxin Liu=, Wenbo Zhao, Zongwu Wang, Yilong Zhao, Tao Yang, Yiran Chen, Li Jiang
IVQ: In-Memory Acceleration of DNN Inference Exploiting Varied Quantization (CCF Tier A)
DAC 2022
(Top Conf. in Design Automation)
Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Zhezhi He, Naifeng Jing, Xiaoyao Liang, Li Jiang
EBSP: Evolving Bit Sparsity Patterns for Hardware Friendly Inference of Quantized Deep Neural Networks (Acceptance Rate: 24.7%)
DAC 2022
(Top Conf. in Design Automation)
Fangxin Liu=, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Zhezhi He, Rui Yang, Qidong Tang, Tao Yang, Cheng Zhuo
PIM-DH: ReRAM based Processing in Memory Architecture for Deep Hashing Acceleration (Acceptance Rate: 24.7%)
DAC 2022
(Top Conf. in Design Automation)
Fangxin Liu, Wenbo Zhao, Zongwu Wang, Yongbiao Chen, Tao Yang, Zhezhi He, Xiaokang Yang, Li Jiang
SATO: Spiking Neural Network Acceleration via Temporal Oriented Dataflow and Architecture (Acceptance Rate: 24.7%)
ICASSP 2022
(Top Conf. in Signal Processing)
Fangxin Liu=, Wenbo Zhao, Yongbiao Chen, Zongwu Wang, Fei Dai
DynSNN: A Dynamic Approach to Reduce Redundancy in Spiking Neural Networks (CCF Tier B)
AAAI 2022 (Oral)
(Top Conf. in Artificial Intelligence)
Fangxin Liu, Wenbo Zhao*, Yongbiao Chen, Zongwu Wang, Li Jiang
SpikeConverter: An Efficient Conversion Framework Zipping the Gap between Artificial Neural Networks and Spiking Neural Networks (Acceptance Rate: 15%)
Frontiers in Neuroscience, 2021
(SCI Tier 2)
Fangxin Liu=, Wenbo Zhao=, Yongbiao Chen, Zongwu Wang, Tao Yang, Li Jiang
SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training (Impact Factor: 4.7)
ICCD 2021
Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Tao Yang, Jingnai Feng, Xiaoyao Liang, Li Jiang
SME: ReRAM-based Sparse-Multiplication-Engine to Squeeze-Out Bit Sparsity of Neural Network (Acceptance Rate: 24.4%)
ICCV 2021
(Top Conf. in Computer Vision)
Fangxin Liu=, Wenbo Zhao, Zhezhi He, Yanzhi Wang, Zongwu Wang, Changzhi Dai, Xiaoyao Liang, Li Jiang
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point (Acceptance Rate: 25.9%)
ICCAD 2021
(Top Conf. in Design Automation)
Fangxin Liu, Wenbo Zhao, Zhezhi He, Zongwu Wang, Yilong Zhao, Yongbiao Chen, Li Jiang
Bit-Transformer: Transforming Bit-level Sparsity into Higher Performance in ReRAM-based Accelerator (Acceptance Rate: 23.5%)
GLSVLSI 2021
Fangxin Liu=, Wenbo Zhao, Zongwu Wang, Tao Yang, Li Jiang
IM3A: Boosting Deep Neural Network Efficiency via In-Memory Addressing-Assisted Acceleration (Acceptance Rate: 24%)
ICMR 2022
Yongbiao Chen, Fangxin Liu, et al.
TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval
ICMR 2022
Yongbiao Chen, Fangxin Liu, et al.
Supervised Contrastive Vehicle Quantization for Efficient Vehicle Retrieval
DATE 2022
(Top Conf. in Design Automation)
Zongwu Wang, Fangxin Liu, et al.
Self-Terminated Write of Multi-Level Cell ReRAM for Efficient Neuromorphic Computing (Best Paper Award)
DATE 2022
(Top Conf. in Design Automation)
Tao Yang, Fangxin Liu, et al.
DTQAtten: Leveraging Dynamic Token-based Quantization for Efficient Attention Architecture (Nominated for Best Paper)
ASPDAC 2022
(Top Conf. in Design Automation)
Qidong Tang, Fangxin Liu, et al.
HAWIS: Hardware-Aware Automated WIdth Search for Accurate, Energy-Efficient and Robust Binary Neural Network on ReRAM Dot-Product Engine
DAC 2021
(Top Conf. in Design Automation)
Tao Yang, Fangxin Liu, et al.
PIMGCN: A ReRAM-based PIM Design for Graph Convolutional Network Acceleration