Computer Science Department
School of Computer Science, Carnegie Mellon University



CMU-CS-09-111

Learning to Improve Negotiation in
Semi-Cooperative Agreement Problems

Elisabeth Crawford

May 2009

Ph.D. Thesis

CMU-CS-09-111.pdf


Keywords: Multiagent negotiation, multiagent learning, machine learning, personal assistant agents

In agreement problems, agents must assign options to variables, typically via negotiation, in order to gain reward. For an assignment to occur, agents with different individual preferences must agree on an option. We provide a formalism for agreement problems and negotiation that encompasses important real-world problems people face daily, e.g., selecting a meeting time, deciding on a location for a group lunch, and assigning roles in a joint task.
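To make the setting concrete, the sketch below (Python, with hypothetical names, not the thesis's formal notation) shows one way such a problem can be represented: each variable has a set of candidate options, and each agent holds private preference scores over those options.

    # Illustrative sketch of an agreement problem (hypothetical names): each
    # variable must be assigned one option that the participating agents agree
    # on, even though the agents rank the options differently.
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Variable:
        name: str            # e.g. "meeting-time" or "lunch-location"
        options: List[str]   # candidate assignments the agents can agree on

    @dataclass
    class Agent:
        name: str
        # Private preference score for each option of each variable; higher is better.
        preferences: Dict[str, Dict[str, float]]

    def preferred_order(agent: Agent, variable: Variable) -> List[str]:
        """Order a variable's options by this agent's private preference."""
        scores = agent.preferences[variable.name]
        return sorted(variable.options, key=lambda o: scores.get(o, 0.0), reverse=True)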

We focus on the challenge of designing algorithms that improve agent negotiation. We formalize the negotiation process as a series of rounds in which the agents each make an offer until they reach agreement. The offers agents make, and hence the outcome of a negotiation, are influenced by the agents' negotiation strategies, preferences, and utility functions. As an agent negotiates with other agents, it can keep a record of the offers those agents made and the negotiation rounds in which they were made. We address this challenge by providing learning algorithms that agents can use to inform, and thereby improve, their negotiation. In particular, we show how an agent can learn complex models of its own user's preferences by observing the outcomes of negotiations. We also address the problem of modeling the properties of other agents from negotiation histories. This problem is particularly challenging because negotiation histories do not provide sufficient information to reconstruct a complete model of an agent. We therefore developed a domain-independent approach to designing and learning abstractions of the properties of other agents. We demonstrate that an agent can learn these abstractions online while negotiating and use them to increase the efficiency of its negotiation without sacrificing its individual preferences. We also provide an algorithm for agents to model the reward they receive from different negotiation strategies. When an agent faces a wide variety of negotiation behavior in other agents, we show that it can improve its negotiation performance by using experts algorithms to adapt its selection of negotiation strategies according to their performance.
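As one concrete illustration of the experts-algorithm idea above, the following is a minimal Exp3-style sketch of adapting strategy selection from observed negotiation rewards (all names are hypothetical; the thesis may use a different experts algorithm and reward model).

    # Minimal Exp3-style sketch: maintain a weight per negotiation strategy,
    # sample a strategy for each negotiation, and reweight it using an
    # importance-weighted estimate of the reward it produced.
    import math
    import random

    class StrategySelector:
        def __init__(self, num_strategies: int, gamma: float = 0.1):
            self.gamma = gamma                      # exploration rate
            self.weights = [1.0] * num_strategies   # one weight per negotiation strategy

        def _probs(self):
            total = sum(self.weights)
            k = len(self.weights)
            return [(1 - self.gamma) * w / total + self.gamma / k for w in self.weights]

        def choose(self) -> int:
            """Sample a strategy index to use in the next negotiation."""
            return random.choices(range(len(self.weights)), weights=self._probs())[0]

        def update(self, chosen: int, reward: float):
            """Update with the reward in [0, 1] observed for the strategy just used."""
            p = self._probs()[chosen]
            estimated = reward / p                  # importance-weighted reward estimate
            self.weights[chosen] *= math.exp(self.gamma * estimated / len(self.weights))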

We observe that agreement problems are neither fully cooperative (agents have preferences) nor fully competitive (agents must agree to receive reward). We define semi-cooperative agents to capture this space between cooperation and self-interest. The utility function of a semi-cooperative agent trades off preference satisfaction against the time taken to reach agreement; the degree of the trade-off depends on the individual agent. We show how semi-cooperative agents can estimate the utility of different negotiation outcomes by learning online about other agents' behavior. We also provide an in-depth analysis of two different strategies for using the learned information.
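One way to write down such a trade-off, purely as an illustration under assumed notation (the thesis's exact definition may differ), is a per-agent discount on preference value:

    % Illustrative semi-cooperative utility (assumed notation): agent i's
    % utility for agreeing on option o in round r discounts its preference
    % value by the time taken to agree; gamma_i encodes the trade-off.
    \[
      U_i(o, r) \;=\; \gamma_i^{\,r} \cdot \mathrm{pref}_i(o),
      \qquad \gamma_i \in (0, 1]
    \]

Under this illustrative form, a gamma_i close to 1 describes an agent that mostly cares about obtaining a preferred option, while a small gamma_i describes an agent that places more value on agreeing quickly.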

The approaches presented apply to a wide range of agreement problems. We provide analytical and experimental results that demonstrate their effectiveness in representative domains.

201 pages

