Result: Message Passing Algorithm for the Generalized Assignment Problem

Title:
Message Passing Algorithm for the Generalized Assignment Problem
Contributors:
University of Illinois at Urbana-Champaign [Urbana] (UIUC), University of Illinois System, Ching-Hsien Hsu, Xuanhua Shi, Valentina Salapura, TC 10, WG 10.3
Source:
11th IFIP International Conference on Network and Parallel Computing (NPC), pp. 423-434
Publisher Information:
HAL CCSD; Springer, 2014.
Publication Year:
2014
Collection:
collection:IFIP-LNCS
collection:IFIP
collection:IFIP-AICT
collection:IFIP-TC
collection:IFIP-LNCS-8707
collection:IFIP-TC10
collection:IFIP-NPC
collection:IFIP-WG10-3
Subject Geographic:
Original Identifier:
HAL: hal-01403111
Document Type:
Conference papers (conferenceObject)
Language:
English
Relation:
info:eu-repo/semantics/altIdentifier/doi/10.1007/978-3-662-44917-2_35
DOI:
10.1007/978-3-662-44917-2_35
Rights:
info:eu-repo/semantics/OpenAccess
URL: http://creativecommons.org/licenses/by/
Accession Number:
edshal.hal.01403111v1
Database:
HAL

Further Information

Part 4: Applications of Parallel and Distributed Computing
The generalized assignment problem (GAP) is NP-hard; it is in fact APX-hard to approximate. The best known approximation algorithm is the LP-rounding algorithm of [1], which achieves a $(1-\frac{1}{e})$ approximation ratio. We investigate the max-product belief propagation algorithm for GAP, which is well suited to distributed implementation. The basic algorithm passes an exponential number of real-valued messages in each iteration. We show that the algorithm can be simplified so that only a linear number of real-valued messages are passed per iteration. In particular, the computation of the messages from machines to jobs decomposes into two knapsack problems, which also appear in each iteration of the LP-rounding algorithm, and the messages can be computed in parallel within each iteration. We observe that, for small GAP instances whose optimal solution can be computed exactly, the message passing algorithm converges to the optimal solution whenever that solution is unique. We then show how to add small deterministic perturbations to ensure the uniqueness of the optimum. Finally, we prove that GAP remains strongly NP-hard even when the optimum is unique.
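
For reference, GAP can be stated as the following integer program; the symbols $p_{ij}$, $w_{ij}$, and $c_i$ are generic notation for profits, weights, and capacities, not necessarily the paper's:

$$\max \sum_{i=1}^{m}\sum_{j=1}^{n} p_{ij}\, x_{ij} \quad \text{s.t.} \quad \sum_{i=1}^{m} x_{ij} \le 1 \;\;\forall j, \qquad \sum_{j=1}^{n} w_{ij}\, x_{ij} \le c_i \;\;\forall i, \qquad x_{ij} \in \{0,1\},$$

where $x_{ij}=1$ means job $j$ is assigned to machine $i$.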
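
To make the knapsack structure concrete, below is a minimal Python sketch; it is not the paper's message update rules. It sets up a tiny GAP instance, solves the standard 0/1 knapsack that a single machine faces when choosing jobs within its capacity in isolation, and brute-forces the global optimum for comparison. The instance data and function names are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact message updates): a GAP instance,
# the per-machine 0/1 knapsack subproblem, and a brute-force global optimum.

from itertools import product

# A small GAP instance: profit[i][j] and weight[i][j] for machine i, job j;
# capacity[i] for machine i. All values here are made up for illustration.
profit = [[6, 4, 3], [5, 7, 2]]
weight = [[2, 3, 2], [3, 2, 2]]
capacity = [4, 4]
num_machines, num_jobs = len(profit), len(profit[0])


def knapsack(values, weights, cap):
    """Standard 0/1 knapsack by dynamic programming over capacities.
    Returns the best value and the chosen item set."""
    best = [(0, frozenset())] * (cap + 1)
    for item, (v, w) in enumerate(zip(values, weights)):
        for c in range(cap, w - 1, -1):
            cand = best[c - w][0] + v
            if cand > best[c][0]:
                best[c] = (cand, best[c - w][1] | {item})
    return best[cap]


# Each machine's "local" problem, ignoring the other machines, is a knapsack;
# computations of this kind are the building block behind the simplified
# machine-to-job messages described in the abstract.
for i in range(num_machines):
    val, chosen = knapsack(profit[i], weight[i], capacity[i])
    print(f"machine {i}: best local profit {val} with jobs {sorted(chosen)}")


def brute_force_gap():
    """Enumerate every assignment of jobs to machines (or to no machine, -1);
    feasible only for tiny instances, used here as a ground-truth check."""
    best_val, best_assign = 0, None
    for assign in product(range(-1, num_machines), repeat=num_jobs):
        load = [0] * num_machines
        val, feasible = 0, True
        for j, i in enumerate(assign):
            if i >= 0:
                load[i] += weight[i][j]
                val += profit[i][j]
                if load[i] > capacity[i]:
                    feasible = False
                    break
        if feasible and val > best_val:
            best_val, best_assign = val, assign
    return best_val, best_assign


print("GAP optimum:", brute_force_gap())
```

The sketch only isolates the knapsack subproblem; it does not run the message passing itself. The point of the simplification in the abstract is that each iteration reduces to knapsack computations of this kind, which the machines can carry out in parallel.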