Unforeseen Contingencies in Human and Machine Learning -- Theory and Experiments

In standard models, agents are assumed to have an exogenously given correct model of the underlying uncertainty. In practice, the "correct model" is rarely known and must be inferred from data. Both human and AI decision-makers may experience surprises and have to take "unknown unknowns" into account. This project studies how humans and machines derive the relevant model of uncertainty from available data in view of possible unforeseen contingencies. The goals of the project are threefold: (i) derive and study an axiomatic model of decision making under uncertainty, applicable to both human and machine learning; (ii) use the model to generate behavioral and normative predictions, and to study learning and the value of information acquisition in such environments; (iii) design an experiment to test the theoretical predictions using human subjects and AI algorithms.
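To make the notion of inferring the chance of unforeseen contingencies concrete, the following is a minimal illustrative sketch (not part of the proposal) of one well-known statistical approach, the Good-Turing missing-mass estimate, by which a learner can assign probability to outcomes it has never observed:

```python
from collections import Counter

def missing_mass_estimate(observations):
    """Good-Turing estimate of the probability that the next
    observation is a contingency never seen before:
    (number of categories observed exactly once) / (sample size)."""
    counts = Counter(observations)
    n = len(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / n

# Hypothetical example: outcomes a learner has seen so far.
sample = ["a", "a", "b", "c", "c", "c", "d"]
# "b" and "d" each appeared once, so estimated unseen mass = 2/7
print(missing_mass_estimate(sample))  # -> 0.2857...
```

Estimators of this kind illustrate the gap the project targets: a purely frequentist learner assigns zero probability to unseen events, whereas both humans and well-designed algorithms must reserve some belief for surprises.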