Cognitive Science Lunchtime Talk - Mike Morais

Thu, Dec 12, 2019, 12:00 pm to 1:00 pm
Location: 
Peretsman Scully Hall - Room 101
Speaker(s): 
Mike Morais

Abstract: How do doctors choose diagnoses, bankers choose investments, judges choose verdicts, or lawmakers choose policies? All of these tasks demand distilling a massive body of information into discrete decisions, and for humans and machines alike approximate solutions are necessary. But what is the anatomy of a good approximation? Naively, we want an approximate solution that is “closest” to the true one, and machine learning offers a constellation of statistical definitions of closeness. Cognitive science and behavioral economics, however, offer a different definition: the best approximations are those that yield the best decisions, i.e. the ones that minimize risk. This distinction becomes particularly important when (i) risks are significant and asymmetric (missing a cancer diagnosis is much worse than running unnecessary tests) and (ii) the worst outcomes are difficult to elicit (cancer diagnoses are infrequent). In such settings, approximations in humans share a common feature: risk aversion, i.e. an overrepresentation of these costly, infrequent outcomes, observable in everything from high-level memory biases to low-level perceptual biases.

Algorithmic approaches stand to benefit similarly from these cognitive biases, but how to embed them is less explored. I will discuss a recent project, loss-calibrated expectation propagation, which develops machine learning methods for this family of approximate decision problems. Algorithmically, it performs simultaneous approximate inference and risk minimization by partitioning the problem into manageable sub-problems (expectation propagation; Minka, 2001) and oversensitizing the approximation to risk (loss-calibration via inverse utility weighting; Lieder et al., 2017). I will focus on this model as a normative model of decision-making and on its implications for distributed learning.
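To make the asymmetry point concrete, here is a minimal sketch (not from the talk) of a Bayes-risk-minimizing decision under an asymmetric loss. The loss matrix and its 50:1 cost ratio are illustrative assumptions, not figures from the abstract:

```python
import numpy as np

# Hypothetical asymmetric loss matrix for a diagnosis decision:
# rows = true state (healthy, cancer), cols = action (no test, run test).
# Missing a cancer diagnosis (loss 50) is assumed far worse than an
# unnecessary test (loss 1); the exact numbers are illustrative only.
LOSS = np.array([[0.0, 1.0],
                 [50.0, 0.0]])

def bayes_action(p_cancer: float) -> int:
    """Pick the action that minimizes expected loss under the posterior."""
    posterior = np.array([1.0 - p_cancer, p_cancer])
    expected_loss = posterior @ LOSS   # one expected loss per action
    return int(np.argmin(expected_loss))

# Even a small posterior probability of cancer triggers testing:
for p in [0.01, 0.02, 0.05]:
    print(p, "-> run test" if bayes_action(p) else "-> no test")
```

Under this loss, testing is optimal whenever the posterior probability of cancer exceeds 1/51, roughly 0.02: the asymmetry pushes the decision threshold far below one half, which is exactly why an approximation judged only by "closeness" to the posterior can still induce poor decisions near that threshold.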
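And here is a minimal sketch of the utility-weighting ingredient in the spirit of Lieder et al. (2017): sample outcomes in proportion to probability times |utility|, so that rare but costly outcomes are overrepresented, then undo the bias with importance weights so the expected-utility estimate stays unbiased. The outcome utilities and probabilities below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcome distribution: a common small gain, an occasional
# small loss, and a rare catastrophic loss (values are illustrative).
utilities = np.array([1.0, -1.0, -100.0])
probs     = np.array([0.90, 0.09, 0.01])
true_eu   = probs @ utilities            # exact expected utility: -0.19

def naive_estimate(n):
    """Plain Monte Carlo: sample outcomes at their true frequencies.
    The rare catastrophic outcome is usually missed at small n."""
    return rng.choice(utilities, size=n, p=probs).mean()

def utility_weighted_estimate(n):
    """Sample outcomes in proportion to p(x) * |u(x)|, overrepresenting
    extreme outcomes, then correct with importance weights p/q."""
    q = probs * np.abs(utilities)
    q /= q.sum()
    idx = rng.choice(len(utilities), size=n, p=q)
    w = probs[idx] / q[idx]              # importance weights
    return (w * utilities[idx]).mean()   # unbiased estimate of E[u]

print("exact expected utility:", true_eu)
print("naive, n=20:           ", naive_estimate(20))
print("utility-weighted, n=20:", utility_weighted_estimate(20))
```

In this toy setup every reweighted sample contributes about ±1.99, so at n=20 the utility-weighted estimator's standard deviation is roughly 0.44, versus roughly 2.2 for the naive estimator, whose samples are usually ±1 and only rarely the decisive -100. This overrepresentation of costly, infrequent outcomes is the risk-sensitivity that, per the abstract, loss-calibrated expectation propagation builds into the approximate inference itself.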