Resource Allocation Among Agents with Preferences Induced by Factored MDPs

Dmitri A. Dolgov and Edmund H. Durfee

In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-06). Pages 297--304. May 2006.

Copyright © 2006 ACM. Publisher's version is available at http://doi.acm.org/10.1145/1160633.1160684.

Abstract
Distributing scarce resources among agents in a way that maximizes the social welfare of the group is a computationally hard problem when the value of a resource bundle is not linearly decomposable. Furthermore, the problem of determining the value of a resource bundle can be a significant computational challenge in itself, such as for an agent operating in a stochastic environment, where the value of a resource bundle is the expected payoff of the optimal policy realizable given those resources. Recent work has shown that the structure in agents' preferences induced by stochastic policy-optimization problems (modeled as MDPs) can be exploited to solve the resource-allocation and the policy-optimization problems simultaneously, leading to drastic (often exponential) improvements in computational efficiency. However, previous work used a flat MDP model that scales very poorly. In this work, we present and empirically evaluate a resource-allocation mechanism that achieves much better scaling by using factored MDP models, thus exploiting both the structure in agents' MDP-induced preferences and the structure within agents' MDPs.
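The abstract's central notion — that a resource bundle's value is the expected payoff of the optimal policy the agent can execute given that bundle — can be sketched with a toy flat MDP. This is an illustration only, not the paper's factored-MDP mechanism; the two-state MDP, the action names, and the resource requirements below are all invented for the example.

```python
# Toy illustration (assumed example, not the paper's algorithm): each action
# requires a set of resources, and a bundle is valued by running value
# iteration restricted to the actions the bundle enables.

GAMMA = 0.9  # discount factor

# Tiny 2-state MDP: transitions[s][a] = list of (next_state, prob, reward).
transitions = {
    0: {"walk": [(0, 1.0, 1.0)], "drive": [(1, 1.0, 5.0)]},
    1: {"walk": [(1, 1.0, 1.0)], "drive": [(0, 1.0, 5.0)]},
}

# Resource requirements per action (hypothetical).
requires = {"walk": set(), "drive": {"car", "fuel"}}

def bundle_value(bundle, start=0, iters=200):
    """Value of `bundle` = optimal expected discounted payoff from `start`,
    maximizing only over actions whose resource requirements are covered."""
    enabled = {a for a, req in requires.items() if req <= bundle}
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):  # standard value iteration
        V = {
            s: max(
                sum(p * (r + GAMMA * V[s2]) for s2, p, r in outs)
                for a, outs in transitions[s].items()
                if a in enabled
            )
            for s in transitions
        }
    return V[start]
```

With an empty bundle only "walk" is available (value 1/(1-0.9) = 10); granting {"car", "fuel"} enables "drive" and raises the bundle's value to 50. The social-welfare problem the paper addresses is then to allocate scarce resources across many agents so the sum of such bundle values is maximized, which is hard precisely because these values are not linearly decomposable in the individual resources.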


BibTex
@inproceedings{ dolgov06resourceFMDP,
   paperID   = "AAMAS-06",
   month     = "May",
   author    = "Dmitri A. Dolgov and Edmund H. Durfee",
   booktitle = "Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-06)",
   address   = "Hakodate, Japan",
   title     = "Resource Allocation Among Agents with Preferences Induced by Factored {MDPs}",
   pages     = "297--304",
   year      = "2006"
}

