Extensive studies have shown that deep learning models are vulnerable to adversarial and natural noise, yet little is known about model robustness against noise caused by different system implementations. In this paper, we introduce, for the first time, SysNoise, a frequently occurring but often overlooked noise in the deep learning training-deployment cycle. SysNoise arises when the source training system is switched to a disparate target system at deployment, where many subtle system mismatches add up to a non-negligible difference. We first identify and classify SysNoise into three categories based on the inference stage; we then build a holistic benchmark to quantitatively measure the impact of SysNoise on 20+ models, covering image classification, object detection, and instance segmentation tasks. Our extensive experiments reveal that SysNoise can noticeably affect model robustness across different tasks, and that common mitigations such as data augmentation and adversarial training offer limited protection against it. Together, our findings open a new research topic, and we hope this work will draw attention to deep learning deployment systems and their effect on model performance.
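To make the mismatch concrete, below is a minimal illustrative sketch (not the benchmark code) of one pre-processing SysNoise source: Pillow and OpenCV both offer "bilinear" resizing, yet they produce slightly different pixel values on the same input. The random image and the 224x224 target size are placeholders chosen for illustration.

```python
# Minimal sketch of a pre-processing SysNoise source: resize-interpolation
# mismatch between two libraries that both implement "bilinear" resizing.
# Assumes Pillow and opencv-python are installed; the random image stands
# in for a real dataset sample.
import numpy as np
import cv2
from PIL import Image

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)

# "Training" pipeline: Pillow bilinear resize to 224x224.
pil_out = np.asarray(
    Image.fromarray(img).resize((224, 224), resample=Image.BILINEAR)
)

# "Deployment" pipeline: OpenCV bilinear resize to the same shape.
cv_out = cv2.resize(img, (224, 224), interpolation=cv2.INTER_LINEAR)

# The two nominally identical pre-processing steps disagree pixel-wise.
diff = np.abs(pil_out.astype(np.int16) - cv_out.astype(np.int16))
print("max abs pixel difference:", diff.max())
print("fraction of differing pixels:", (diff > 0).mean())
```

Small per-pixel deltas like these, accumulated across decode, resize, precision, and operator mismatches, are the kind of training-deployment inconsistency the benchmark measures at the task level.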
Leaderboard: Classification Task
Dataset
Future Work
Citation
@article{yan2022sysnoise,
  title={SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency},
  author={Yan Wang and Yuhang Li and Ruihao Gong and Aishan Liu and Yanfei Wang and Jian Hu and Yongqiang Yao and Tianzi Xiao and Fengwei Yu and Xianglong Liu},
  year={2022}
}
Contribute