Computer Science Department
School of Computer Science, Carnegie Mellon University


Optimal Social Decision Making

Nisarg Shah

August 2016

Ph.D. Thesis


Keywords: Algorithmic economics, Mechanism design, Fair division, Social choice, Maximum Nash welfare, Leximin, Dynamic fair division, Implicit utilitarian voting, Robust voting

How can computers help ordinary people make collective decisions about real-life dilemmas, like which restaurant to go to with friends, or even how to divide an inheritance? This requires the fusion of two fields: economics, which studies how people derive utility from such decisions and the incentives at play, and computer science, which informs the design of real-world computing systems that implement economic solutions. This fusion has already yielded a number of practical applications, from protection of infrastructure targets to exchange of organs.

We study two fields born of this fusion, namely computational fair division and computational social choice. Both aim to aid people in making optimal social decisions. In this thesis, we identify principled solution concepts in the two fields, and provide a comprehensive analysis of their guarantees. Our work underlies the design and implementation of the deployed service Spliddit, which has already been used by tens of thousands of people, and of the upcoming service RoboVote.

We present our results in two parts. In Part I, we focus on fair division, which addresses the question of fairly dividing a set of resources among a set of people. We study two principled solution concepts from welfare economics: maximizing Nash welfare (Chapter 2) and maximizing egalitarian welfare (a.k.a. the leximin mechanism, Chapter 3), and identify broad fair division domains in which these solutions provide compelling guarantees. Maximizing Nash welfare has been deployed on Spliddit for dividing goods (such as jewelry and art), and the leximin mechanism has applications to allocating unused classrooms in the public school system in California. We also study a specialized domain in which the leximin mechanism has had a significant impact: allocation of computational resources. We build upon existing work to incorporate practicalities of real-world systems such as indivisibilities (Chapter 4) and dynamics (Chapter 5), and design mechanisms (often variants of the leximin mechanism) that provide convincing guarantees.
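To make the first solution concept concrete: maximizing Nash welfare means choosing the allocation that maximizes the product of the agents' utilities. A minimal brute-force sketch for indivisible goods with additive utilities (illustrative only; the thesis develops far more efficient and general machinery, and the input format here is an assumption):

```python
from itertools import product

def max_nash_welfare(utilities):
    """Brute-force maximum Nash welfare allocation of indivisible goods.

    utilities[i][g] is agent i's additive value for good g.
    Returns a tuple assigning each good to an agent, chosen to
    maximize the product of the agents' total utilities.
    """
    n, m = len(utilities), len(utilities[0])
    best_welfare, best_alloc = -1, None
    # Enumerate every assignment of the m goods to the n agents.
    for alloc in product(range(n), repeat=m):
        utils = [sum(utilities[i][g] for g in range(m) if alloc[g] == i)
                 for i in range(n)]
        welfare = 1
        for u in utils:
            welfare *= u
        if welfare > best_welfare:
            best_welfare, best_alloc = welfare, alloc
    return best_alloc
```

For example, with two agents valuing two goods as `[[10, 1], [1, 10]]`, the product-maximizing allocation gives each agent the good they value most, yielding Nash welfare 10 × 10 = 100.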

In Part II, we focus on social choice theory, which addresses the question of aggregating heterogeneous preferences or opinions of a set of people towards a collective outcome. We study two different paradigms of this theory: aggregation of subjective preferences towards a social consensus, and aggregation of noisy estimates of an objective ground truth towards an accurate estimate of the ground truth. In the former paradigm, we advance the recently advocated implicit utilitarian approach (Chapter 6), and offer solutions for selecting a set of outcomes; this work has been deployed on RoboVote. In the latter paradigm, we generalize the prevalent maximum likelihood estimation approach, and propose the design of robust voting rules that are not tailored to restrictive assumptions (Chapter 7). We also design robust voting rules for the case where the opinions of the individuals are correlated via an underlying social network (Chapter 8). Finally, taking the robustness approach to the next level, we formulate the first worst-case approach to aggregating noisy votes (Chapter 9), which has a strong connection to error-correcting codes, and show that the worst-case optimal rules offer not only attractive theoretical guarantees, but also superior performance on real data.
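The connection to error-correcting codes mentioned above rests on viewing rankings as "codewords" under a distance metric, commonly the Kendall tau distance (the number of pairwise disagreements between two rankings). A minimal sketch of that metric, as an illustration rather than the thesis's actual machinery:

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall tau distance: the number of pairs of alternatives
    that the two rankings order differently."""
    pos1 = {a: i for i, a in enumerate(r1)}
    pos2 = {a: i for i, a in enumerate(r2)}
    # A pair disagrees when its relative order flips between rankings.
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)
```

For instance, the rankings `a > b > c` and `c > b > a` disagree on all three pairs, so their distance is 3; in the worst-case framing, a voting rule must recover the ground-truth ranking from votes that lie within some bounded distance of it.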

283 pages

Thesis Committee:
Ariel D. Procaccia (Chair)
Nina Balcan
Avrim Blum
Tuomas Sandholm
Vincent Conitzer (Duke University)

Frank Pfenning, Head, Computer Science Department
Andrew W. Moore, Dean, School of Computer Science
