SysNoise
Exploring and Benchmarking Training-Deployment System Inconsistency

Extensive studies have shown that deep learning models are vulnerable to adversarial and natural noise, yet little is known about model robustness against noise caused by different system implementations. In this paper, we introduce for the first time SysNoise, a frequently occurring but often overlooked noise in the deep learning training-deployment cycle. SysNoise arises when the source training system is switched to a disparate target system at deployment time, where various tiny system mismatches add up to a non-negligible difference. We first identify and classify SysNoise into three categories based on the inference stage; we then build a holistic benchmark to quantitatively measure its impact on 20+ models across image classification, object detection, and instance segmentation tasks. Our extensive experiments reveal that SysNoise has a measurable impact on model robustness across tasks, and that common mitigations such as data augmentation and adversarial training offer limited protection against it. Together, our findings open a new research topic, and we hope this work raises research attention to the role deep learning deployment systems play in model performance.
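As an illustration of how such mismatches arise, the minimal sketch below (our own example, not code from the paper) compares bilinear resizing of the same image in Pillow and OpenCV. The two implementations return slightly different pixel values, which is exactly the kind of tiny, system-dependent difference that SysNoise accumulates along the deployment pipeline. The random input image is a stand-in for a decoded photo.

import numpy as np
import cv2
from PIL import Image

# Stand-in input: a random uint8 "image"; in practice this would be a decoded JPEG.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# Resize to 224x224 with bilinear interpolation in both libraries.
pil_resized = np.asarray(
    Image.fromarray(img).resize((224, 224), resample=Image.BILINEAR)
)
cv2_resized = cv2.resize(img, (224, 224), interpolation=cv2.INTER_LINEAR)

# The two results generally differ by a few intensity levels per pixel.
diff = np.abs(pil_resized.astype(np.int32) - cv2_resized.astype(np.int32))
print("max abs difference:", diff.max())
print("fraction of differing pixels:", (diff > 0).mean())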

Leaderboard: Classification Task

Leaderboard: Object Detection Task

Leaderboard: Instance Segmentation Task

Dataset

Future Work

Based on the research conducted in this paper, our future work will focus on extending SysNoise to other fields such as speech and audio. We will explore how SysNoise arises in the different steps of the ML pipeline and benchmark it. We will keep updating our website, and the final results will be released there. So far, we have found that model quantization influences the text-to-speech task. A preliminary speech example can be found here, containing text-to-speech results with and without quantization noise; an example of how to visualize audio differences is also provided here.
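The sketch below (a hypothetical toy setup, not our TTS pipeline) shows the kind of quantization-induced output shift referred to above: the same input is run through a small float model and its int8 dynamically quantized counterpart, and the outputs differ slightly. In a text-to-speech system this difference propagates into the generated audio.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model: a small linear stack; a real TTS model would be far larger.
model = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 80)).eval()

# Post-training dynamic quantization of the Linear layers to int8.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Same input, two "systems": float inference vs. quantized inference.
x = torch.randn(1, 80)
with torch.no_grad():
    y_fp32 = model(x)
    y_int8 = quantized(x)

# The outputs differ slightly; this is the quantization noise we observe.
print("max abs output difference:", (y_fp32 - y_int8).abs().max().item())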

Citation

Consider citing our work with the following BibTeX:
@article{yan2022sysnoise,
    title={SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency},
    author={Yan Wang and Yuhang Li and Ruihao Gong and Aishan Liu and Yanfei Wang
            and Jian Hu and Yongqiang Yao and Tianzi Xiao and Fengwei Yu and Xianglong Liu},
    year={2022}
}

Contribute to Us

We welcome test results for new models, as well as models that are robust to SysNoise. Contact us to have your model and results added to the leaderboard. If you have other ideas about this noise, you can reach us at wangyan3@sensetime.com.

© 2022, SysNoise, SenseTime