Pathikrit Basu

Our setting involves an analyst whose task is to estimate or learn an agent's preference from data on the agent's choices. A model of preferences is PAC learnable if the analyst can construct a learning rule that learns the agent's preference to arbitrary precision given enough data. When a model is not learnable, we interpret this as the model being susceptible to overfitting. Since PAC learnability is known to be characterised by a model's VC dimension, our paper takes the form of a study of the VC dimension of economic models of choice under uncertainty. We show that the EU and CEU models have finite VC dimension and are consequently learnable. Moreover, the sample complexity of the former is linear, and that of the latter exponential, in the number of states of uncertainty. The MEU model is learnable when there are two states but not when there are at least three, in which case its VC dimension is infinite. Our results also exhibit a close relationship between learnability and the underlying axioms which characterise each model.
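As a rough illustration of the VC-dimension idea behind these results (a hedged sketch under assumed notation, not the paper's construction): suppose an EU agent with state-utility vector u ranks act f above act g exactly when u · (f − g) > 0. Each observed binary choice then labels a difference vector, and a set of choice problems is shattered if every labeling is realised by some candidate agent.

```python
import numpy as np

def shatters(diffs, utilities):
    """True if the candidate EU agents realise all 2^n labelings of diffs.

    Each u labels difference vector d positively iff u . d > 0; the set of
    choice problems is shattered when every +/- labeling pattern appears.
    """
    realized = {tuple(bool(u @ d > 0) for d in diffs) for u in utilities}
    return len(realized) == 2 ** len(diffs)

rng = np.random.default_rng(0)
utilities = rng.normal(size=(5000, 2))   # many candidate EU agents, 2 states
two = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
three = two + [np.array([1.0, 1.0])]

print(shatters(two, utilities))    # True: two problems shattered in 2 states
print(shatters(three, utilities))  # False: labeling (+, +, -) is infeasible
```

The third problem cannot be labeled (+, +, −) because u₁ > 0 and u₂ > 0 force u₁ + u₂ > 0, matching the abstract's claim that sample complexity for EU scales with the number of states.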

learnabilitypreference.pdf (183 kb)

private signals, as well as the actions of others. Players commonly learn the true state (Cripps et al., 2008) but do not coordinate in every equilibrium. We show that it is possible to construct equilibria in which players eventually coordinate on the correct action, for any discount factor. For high discount factors we show that, in addition, players can achieve efficient payoffs.

repeatedcoordination.pdf (214 kb)

**Economics Working Paper No. 110, Institute for Advanced Study, School of Social Science, Princeton, NJ**

**Abstract:** We study the process of decision-making and inference by a single, boundedly rational, economic agent. In each period the agent chooses either a safe or a risky alternative after receiving a signal about the state of the world in that period. The state of the world evolves according to a Markov process with some degree of persistence across time. The agent's decision rule is expressed as a finite-state automaton with a fixed number of memory states. For such an agent, updating on the basis of the received signal amounts to a transition from one memory state to another. The finiteness of the automaton's state space implies that beliefs are coarsened into categories, and a signal causes a (possible) change of category, on the basis of which the next action is taken. The problem is one of a partially observable Markov decision process (POMDP). We characterise the structure of the optimal decision rule in this setting and show how its properties pin down the categories of beliefs and explain some observed, seemingly irrational behaviour. We then specialise to a fixed state of the world, weaken the optimality requirement to admissibility and derive the staircase structure of the admissible automaton. Finally, we examine the question of randomisation in the design of an automaton, propose a measure of the extent of such randomisation and show that there exists a minimal degree of randomisation for the set of automata implementing a given strategy. We show that if the number of signals is large, virtually no randomisation is required.
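A minimal sketch of such a bounded-memory decision rule (the memory states, signals, and transition table below are hypothetical illustrations, not the paper's derived optimum): memory states stand in for coarse belief categories, each signal triggers a transition, and the next action depends only on the current memory state.

```python
class AutomatonRule:
    """A finite-state automaton acting as a bounded-memory decision rule."""

    def __init__(self, transitions, actions, start):
        self.transitions = transitions  # transitions[state][signal] -> next state
        self.actions = actions          # actions[state] -> "safe" or "risky"
        self.state = start

    def step(self, signal):
        # Updating on a signal is just a transition between memory states.
        self.state = self.transitions[self.state][signal]
        return self.actions[self.state]

# Three memory states as belief categories: low (0), middle (1), high (2).
# Good signals ("g") move the memory up one step, bad signals ("b") move it
# down -- a staircase-like structure in the spirit of the abstract.
transitions = {0: {"g": 1, "b": 0}, 1: {"g": 2, "b": 0}, 2: {"g": 2, "b": 1}}
actions = {0: "safe", 1: "safe", 2: "risky"}

rule = AutomatonRule(transitions, actions, start=0)
print([rule.step(s) for s in "ggbg"])  # ['safe', 'risky', 'safe', 'risky']
```

The agent takes the risky action only from the highest memory state, so a single bad signal after a run of good ones drops it back to the safe action.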

boundedmemory.pdf (314 kb)

bayesagm.pdf (216 kb)

dynbayesperspriv.pdf (461 kb)

ethnicconflicts.pdf (172 kb)

1) Similarity-based and Bayesian Decision Making: A Perspective

2) On Alternative Approaches to SEU, MEU and Unambiguous Events
