
Title:
Scalable and modular scheduling
Authors:
Source:
Computer Systems: Architectures, Modeling, and Simulation (Samos, 21-23 July 2003 & 19-21 July 2004). Lecture Notes in Computer Science, pp. 433-442
Publisher Information:
Berlin: Springer, 2004.
Publication Year:
2004
Physical Description:
print; 17 references
Original Material:
INIST-CNRS
Subject Terms:
Document Type:
Conference Paper
File Description:
text
Language:
English
Author Affiliations:
LIP, Ecole Normale Superieure de Lyon, 69364 Lyon, France
ISSN:
0302-9743
Rights:
Copyright 2004 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS.
Notes:
Computer science; theoretical automation; systems

Electronics
Accession Number:
edscal.16075863
Database:
PASCAL Archive

Abstract:

Scheduling a program (i.e., constructing a timetable for the execution of its operations) is one of the most powerful methods for automatic parallelization. A schedule gives a blueprint for constructing a synchronous program, suitable for an ASIC or a VLIW processor. However, constructing a schedule entails solving a large linear program. Even if one accepts the (experimental) fact that the simplex algorithm is almost always polynomial, the scheduling time grows as a large power of the program size and of the maximum nesting depth of its loops. Hence the method does not scale. This paper presents two ways of improving the situation. First, a big program can be divided into smaller units (processes) that can be scheduled separately: this is modular scheduling. Second, one can use projection methods to solve the linear programming problems incrementally, which is especially efficient when the dependence graph is sparse.
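
The record stops at the abstract, so the paper's exact algorithm is not reproduced here. As a rough illustration only, the sketch below shows the classical Fourier-Motzkin elimination, a standard projection method of the kind the abstract alludes to (the function and variable names are ours, not the paper's). Projecting a variable pairs every lower bound with every upper bound, a quadratic blow-up, which is why a sparse dependence graph, keeping the number of constraints per variable small, makes incremental solving cheap:

    def fm_project(rows, k):
        """Fourier-Motzkin elimination (a standard projection method,
        assumed here for illustration; the paper may use a different
        variant).  Each row (a, b) encodes the inequality a . x <= b
        with integer coefficients.  Returns an equivalent system in
        which variable k has been eliminated."""
        lower, upper, kept = [], [], []
        for a, b in rows:
            c = a[k]
            if c > 0:
                upper.append((a, b))   # gives an upper bound on x_k
            elif c < 0:
                lower.append((a, b))   # gives a lower bound on x_k
            else:
                kept.append((a[:k] + a[k + 1:], b))
        # Pair every lower bound with every upper bound.  This quadratic
        # blow-up is what sparsity keeps under control.
        for al, bl in lower:
            cl = al[k]                 # cl < 0
            for au, bu in upper:
                cu = au[k]             # cu > 0
                a = [cu * x - cl * y for x, y in zip(al, au)]
                b = cu * bl - cl * bu
                del a[k]               # coefficient of x_k is now 0
                kept.append((a, b))
        return kept

    # Tiny check: projecting y out of { x - y <= 0, y <= 5 }
    # should leave x <= 5.
    print(fm_project([([1, -1], 0), ([0, 1], 5)], k=1))
    # -> [([1], 5)]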