Can an AI model anticipate how well it will perform in the wild? > In many important applications, AI models are trained on labeled data, but when deployed in the wild, labels are not readily available (for example, in medical imaging where the model is identifying a cancerous patch, "ground-truth" labels may require expert examination). A critical question is: in the absence of such labels, can the model anticipate how accurate it will be on the data it actually encounters?
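For a concrete flavor of the problem, here is a minimal sketch of one common label-free baseline (not necessarily the method featured here): estimating accuracy from the model's own average softmax confidence. The names `estimate_accuracy_from_confidence`, `model`, and `unlabeled_loader` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def estimate_accuracy_from_confidence(model, unlabeled_loader, device="cpu"):
    """Estimate accuracy on unlabeled data from the model's own
    softmax confidence (mean max-probability). A simple baseline for
    label-free performance estimation; it is often overconfident under
    distribution shift, which is precisely the gap this question probes."""
    model.eval()
    confidences = []
    with torch.no_grad():
        for inputs in unlabeled_loader:  # no ground-truth labels in the wild
            logits = model(inputs.to(device))
            probs = F.softmax(logits, dim=-1)
            confidences.append(probs.max(dim=-1).values)
    # The average max-probability serves as a proxy for expected accuracy.
    return torch.cat(confidences).mean().item()
```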
Robust Attribution Regularization > Recent work on training neural networks to have robust attributions, improving their trustworthiness and resilience to adversarial attacks.
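As a rough sketch of the general idea (work in this area typically regularizes integrated-gradients attributions; the one-step input-gradient surrogate below is a simplification), a training loss can add a penalty that discourages attributions from shifting under small input perturbations. The function name and `lam` weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def attribution_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy plus a simple attribution-stability penalty.
    The attribution is approximated by the input gradient of the loss;
    penalizing its L1 norm is a one-step surrogate for keeping
    attribution maps stable under small adversarial perturbations."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # Input gradient ~ saliency-style attribution of the loss w.r.t. x.
    grad, = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad.abs().sum(dim=tuple(range(1, grad.dim()))).mean()
    return ce + lam * penalty
```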