Out-of-distribution (OOD) detection has received much attention lately due to
its importance in the safe deployment of neural networks. One of the key
challenges is that models lack supervision signals from unknown data, and as a
result, can produce overconfident predictions on OOD data. Previous approaches
rely on real outlier datasets for model regularization, which can be costly and
sometimes infeasible to obtain in practice. In this paper, we present VOS, a
novel framework for OOD detection by adaptively synthesizing virtual outliers
that can meaningfully regularize the model's decision boundary during training.
Specifically, VOS samples virtual outliers from the low-likelihood region of
the class-conditional distribution estimated in the feature space. In addition,
we introduce a novel unknown-aware training objective, which contrastively
shapes the uncertainty space between the ID data and the synthesized outliers.
VOS achieves state-of-the-art performance on both object detection and image
classification models, reducing the FPR95 by up to 7.87% compared to the
previous best method. Code is available at
https://github.com/deeplearning-wisc/vos.
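
As a rough illustration of the sampling step described above (not the authors' released implementation), the sketch below fits class-conditional Gaussians with a shared covariance to penultimate-layer features and keeps the lowest-likelihood draws as virtual outliers. All function and parameter names (e.g., `sample_virtual_outliers`, `samples_per_class`, `keep`) are hypothetical choices for this sketch.

```python
import torch


def sample_virtual_outliers(features, labels, num_classes,
                            samples_per_class=1000, keep=10, eps=1e-4):
    """Fit class-conditional Gaussians (shared covariance) to ID features and
    keep the lowest-likelihood samples per class as virtual outliers."""
    d = features.shape[1]
    # Per-class means of the in-distribution features.
    means = torch.stack([features[labels == c].mean(0) for c in range(num_classes)])
    # Shared covariance across classes, with a small ridge for numerical stability.
    centered = features - means[labels]
    cov = centered.T @ centered / features.shape[0] + eps * torch.eye(d)

    outliers = []
    for c in range(num_classes):
        dist = torch.distributions.MultivariateNormal(means[c], covariance_matrix=cov)
        candidates = dist.sample((samples_per_class,))
        log_prob = dist.log_prob(candidates)
        # Retain only the least likely candidates, i.e., points from the
        # low-likelihood region of the class-conditional distribution.
        idx = torch.topk(-log_prob, keep).indices
        outliers.append(candidates[idx])
    return torch.cat(outliers, dim=0)
```

In a training loop, the returned virtual outliers would be fed through the classification head alongside the ID features so that an unknown-aware regularization term can push their uncertainty scores apart from those of ID data; the exact form of that objective is given in the paper.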