Title: Learning Representations to Tackle Human Uncertainty
Committee:
Dr. AlRegib, Advisor
Dr. Heck, Chair
Dr. Dyer
Abstract: The objective of the proposed research is to tackle human uncertainty from two perspectives, intentional and accidental, by learning model representations. Humans exhibit uncertainty that can be either intentional or accidental. Intentional uncertainty can manifest as multi-modality in human behavior; this multi-modality arises when individuals deliberately manipulate the predictive outcomes of decision cues. Capturing intentional uncertainty is essential in safety-critical applications, where we aim to model the diverse modes of human behavior. Specifically, we develop a generative model that produces a distribution over intended goals, via variational inference on a latent manifold, to better capture intentionally uncertain human behavior at test time. Beyond intentional uncertainty, humans display accidental uncertainty when perception is ambiguous. Perceptual ambiguity can cause multiple humans to interpret the same data differently; this divergence manifests as disagreement between annotators during data labeling and has undue effects on a model's uncertainty and generalizability. We aim to mitigate these undue effects of accidental uncertainty. Specifically, we develop a label dilution scheme that exploits natural scene statistics to alleviate the resulting performance degradation without requiring costly additional human annotation. In summary, this dissertation proposal studies and tackles human uncertainty from both the intentional and the accidental perspective.
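To make the first direction concrete, below is a minimal sketch of one way such a goal-generating model could be realized, assuming a conditional VAE formulation: a recognition network q(z|x, y), a conditional prior p(z|x), and a decoder p(y|x, z). The class name GoalCVAE, the layer sizes, and the Gaussian parameterization are illustrative assumptions, not the proposal's actual architecture.

```python
import torch
import torch.nn as nn

class GoalCVAE(nn.Module):
    """Illustrative conditional VAE: maps an observed state x to a
    distribution over intended goals y via a latent variable z."""
    def __init__(self, x_dim=16, y_dim=2, z_dim=8, h_dim=64):
        super().__init__()
        # Recognition network q(z | x, y), used only during training.
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        # Conditional prior p(z | x), sampled at test time.
        self.prior = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                   nn.Linear(h_dim, 2 * z_dim))
        # Decoder p(y | x, z), which generates a goal.
        self.dec = nn.Sequential(nn.Linear(x_dim + z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, y_dim))

    def forward(self, x, y):
        mu_q, logvar_q = self.enc(torch.cat([x, y], -1)).chunk(2, -1)
        mu_p, logvar_p = self.prior(x).chunk(2, -1)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        y_hat = self.dec(torch.cat([x, z], -1))
        # KL( q(z|x,y) || p(z|x) ) between two diagonal Gaussians.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1).sum(-1)
        return y_hat, kl

    @torch.no_grad()
    def sample_goals(self, x, n=20):
        """Draw n candidate goals per input to cover multiple behavior modes."""
        mu_p, logvar_p = self.prior(x).chunk(2, -1)
        goals = []
        for _ in range(n):
            z = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
            goals.append(self.dec(torch.cat([x, z], -1)))
        return torch.stack(goals, 1)  # shape: (batch, n, y_dim)
```

Training would minimize a reconstruction loss on y_hat plus the KL term (the negative ELBO); at test time, sample_goals draws several latent samples per input so that distinct behavior modes surface as distinct goal hypotheses.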
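For the accidental-uncertainty direction, the following is a hedged illustration of how natural scene statistics might drive label dilution: a one-hot label is mixed toward the uniform distribution in proportion to an ambiguity score derived from the image's MSCN coefficients, a standard NSS feature. The variance-deviation ambiguity proxy and the alpha_max parameter are hypothetical choices for illustration only, not the proposal's actual scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, eps=1e-8):
    """Mean-subtracted contrast-normalized (MSCN) coefficients, a common
    natural-scene-statistics feature."""
    mu = gaussian_filter(image, sigma=7 / 6)
    sigma = np.sqrt(np.abs(gaussian_filter(image ** 2, sigma=7 / 6) - mu ** 2))
    return (image - mu) / (sigma + eps)

def diluted_label(image, one_hot, alpha_max=0.3):
    """Soften a one-hot label in proportion to an NSS-based ambiguity score,
    mimicking annotator disagreement without collecting extra labels."""
    coeffs = mscn(image.astype(np.float64))
    # Natural images tend to have near-unit-variance MSCN coefficients;
    # deviation from that regime serves here as a crude ambiguity signal
    # (a hypothetical proxy, chosen only for this sketch).
    ambiguity = float(np.clip(abs(coeffs.var() - 1.0), 0.0, 1.0))
    alpha = alpha_max * ambiguity
    k = one_hot.shape[-1]
    return (1.0 - alpha) * one_hot + alpha / k  # still sums to 1
```

Applied during training, such diluted labels would replace hard targets in the cross-entropy loss for images the statistics flag as ambiguous, spreading probability mass the way disagreeing annotators would.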