SARDFQ

Code for the ICCV 2025 paper 'Semantic Alignment and Reinforcement for Data-Free Quantization of Vision Transformers'

Environment Setup

conda create -n zsqvit python=3.8

pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html

pip install timm==0.4.12 IPython tqdm scipy matplotlib
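
To confirm the CUDA build of PyTorch installed correctly, a quick optional sanity check (not part of the original setup):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"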

Sample Synthesis

CUDA_VISIBLE_DEVICES=0 python -u generate.py [--model]
[--calib_batchsize] [--save_fake] [--softlabel] [--coe_sf] [--coe_attn]

optional arguments:
--model: Model architecture
--calib_batchsize: Total number of synthesized samples, e.g., 32 or 1024
--save_fake: Whether to save the synthesized (fake) samples
--softlabel: Whether to generate soft labels for the SL loss
--coe_sf: Weight of the SL loss
--coe_attn: Weight of the APA loss
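
For example, a typical invocation might look like the following (the model name and loss weights are illustrative placeholders rather than settings from the paper, and --save_fake/--softlabel are assumed to be boolean switches; check the argparse definitions in generate.py):

CUDA_VISIBLE_DEVICES=0 python -u generate.py --model deit_small --calib_batchsize 32 --save_fake --softlabel --coe_sf 1.0 --coe_attn 1.0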

Quantization

CUDA_VISIBLE_DEVICES=0 python -u test_quant.py [--model]
[--dataset] [--w_bit] [--a_bit] [--calib_batchsize]
[--iter] [--optim_batchsize] [--fake_path] [--rep] [--box_path]
[--softtagets_path]

optional arguments:
--model: Model architecture
--dataset: Path to the ImageNet dataset
--w_bit: Bit-precision of weights
--a_bit: Bit-precision of activations
--calib_batchsize: Number of calibration samples
--iter: Number of optimization iterations
--optim_batchsize: Batch size per optimization iteration; varies with bit-width (refer to the PTQViT settings; see line 384 of test_quant.py)
--rep: Whether to use scale reparameterization (see RepQ-ViT and I&S-ViT)
--fake_path: Path to the synthesized samples
--box_path: Path to the boxes generated by MSR
--softtagets_path: Path to the soft targets generated by SL
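
As a concrete illustration, a hypothetical W4/A4 run could look like this (all values and paths are placeholders; the appropriate choices depend on the argparse definitions in test_quant.py and the PTQViT settings noted above):

CUDA_VISIBLE_DEVICES=0 python -u test_quant.py --model deit_small --dataset /path/to/imagenet --w_bit 4 --a_bit 4 --calib_batchsize 32 --iter 1000 --optim_batchsize 32 --rep --fake_path ./fake_data --box_path ./boxes --softtagets_path ./softtargets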

Citation

If you find the code or idea useful for your work, please cite our paper:

@inproceedings{zhong2024semantics,
title={Semantics Prompting Data-Free Quantization for Low-Bit Vision Transformers},
author={Zhong, Yunshan and Zhou, Yuyao and Zhang, Yuxin and Li, Shen and Li, Yong and Chao, Fei and Zeng, Zhanpeng and Ji, Rongrong},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
year={2025}
}

Acknowledgments

Our code builds on the open-source code listed below. We highly appreciate their contributions!

@inproceedings{li2022psaqvit,
title={Patch Similarity Aware Data-Free Quantization for Vision Transformers},
author={Li, Zhikai and Ma, Liping and Chen, Mengjuan and Xiao, Junrui and Gu, Qingyi},
booktitle={European Conference on Computer Vision},
pages={154--170},
year={2022}
}

@inproceedings{li2023repq,
title={RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers},
author={Li, Zhikai and Xiao, Junrui and Yang, Lianwei and Gu, Qingyi},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={17227--17236},
year={2023}
}

@article{zhong2023s,
title={I\&S-ViT: An Inclusive \& Stable Method for Pushing the Limit of Post-Training ViTs Quantization},
author={Zhong, Yunshan and Hu, Jiawei and Lin, Mingbao and Chen, Mengzhao and Ji, Rongrong},
journal={arXiv preprint arXiv:2311.10126},
year={2023}
}
