We propose a novel training method to integrate rules into deep learning such
that their strength is controllable at inference. Deep Neural Networks with
Controllable Rule Representations (DeepCTRL) incorporates a rule encoder into
the model coupled with a rule-based objective, enabling a shared representation
for decision making. DeepCTRL is agnostic to data type and model architecture.
It can be applied to any kind of rule defined for inputs and outputs. The key
aspect of DeepCTRL is that it does not require retraining to adapt the rule
strength -- at inference, the user can adjust it based on the desired operating
point on the accuracy vs. rule verification ratio trade-off. In real-world
domains where
incorporating rules is critical -- such as Physics, Retail and Healthcare -- we
show the effectiveness of DeepCTRL in teaching rules for deep learning.
DeepCTRL improves the trust and reliability of the trained models by
significantly increasing their rule verification ratio, while also providing
accuracy gains in downstream tasks. Additionally, DeepCTRL enables novel use
cases such as hypothesis testing of the rules on data samples, and unsupervised
adaptation based on shared rules between datasets.
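The core idea of an inference-time-adjustable rule strength can be sketched as a convex combination of a rule representation and a data representation feeding a shared decision head. The sketch below is a minimal, hypothetical illustration in NumPy (the encoder shapes, `alpha` parameterization, and random weights are assumptions for demonstration; the actual DeepCTRL model trains these components jointly with a rule-based objective):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen only for illustration.
d_in, d_latent = 4, 8

# A rule encoder and a data encoder sharing one decision head
# (randomly initialized here; in practice, learned jointly).
W_rule = rng.normal(size=(d_in, d_latent))
W_data = rng.normal(size=(d_in, d_latent))
W_head = rng.normal(size=(d_latent, 1))

def predict(x, alpha):
    """Combine rule and data representations with strength `alpha`.

    alpha is set at inference time -- no retraining is needed to
    move along the accuracy vs. rule verification trade-off.
    """
    z_rule = np.tanh(x @ W_rule)
    z_data = np.tanh(x @ W_data)
    z = alpha * z_rule + (1.0 - alpha) * z_data
    return z @ W_head

x = rng.normal(size=(3, d_in))
y_data_only = predict(x, alpha=0.0)  # rely purely on the data encoder
y_balanced = predict(x, alpha=0.5)   # equal weighting
y_rule_only = predict(x, alpha=1.0)  # rely purely on the rule encoder
```

Because `alpha` enters only at the representation-mixing step, sweeping it at inference traces out different operating points from the same trained weights.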