Title:
Cache-conscious frequent pattern mining on modern and emerging processors
Source:
Best Papers of VLDB 2005. The VLDB Journal, 16(1):77-96
Publisher Information:
Heidelberg: Springer, 2007.
Publication Year:
2007
Physical Description:
print, 46 references
Original Material:
INIST-CNRS
Subject Terms:
Computer science, Exact sciences and technology, Applied sciences, Computer science; control theory; systems, Software, Memory organisation. Data processing, Data processing. List processing. Character string processing, Memory and file management (including protection and security), Information systems. Data bases, Storage access, Data analysis, Cache memory, Statistical association, Very large databases, Symmetric configuration, Information extraction, Data mining, Bottleneck, Grain size, Locality, Multiprocessor, Multithread, Optimization, Tiling, Prefetching, Data structure, Instruction level parallelism, Architecture-conscious algorithms, Association rule mining, Cache-conscious data mining, Frequent itemset mining, Frequent pattern mining
Document Type:
Conference Paper
File Description:
text
Language:
English
Author Affiliations:
Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210, United States
Applications Research Laboratory, Corporate Technology Group, Intel Corporation, Santa Clara, CA 95052, United States
ISSN:
1066-8888
Rights:
Copyright 2007 INIST-CNRS
CC BY 4.0
Unless otherwise stated above, the content of this bibliographic record may be used under a CC BY 4.0 licence by Inist-CNRS
Notes:
Computer science; control theory; systems
Accession Number:
edscal.18441898
Database:
PASCAL Archive

Abstract:

Algorithms are typically designed to exploit the current state of the art in processor technology. As processor technology evolves, however, these algorithms often fail to achieve the maximum attainable performance on modern architectures. In this paper, we examine the performance of frequent pattern mining algorithms on a modern processor. A detailed performance study reveals that even the best frequent pattern mining implementations, with highly efficient memory managers, still grossly under-utilize a modern processor. The primary performance bottlenecks are poor data locality and low instruction-level parallelism (ILP). To address this problem, we propose a cache-conscious prefix tree. The resulting tree improves spatial locality and also enhances the benefit of hardware cache-line prefetching. Furthermore, the design of this data structure allows the use of path tiling, a novel tiling strategy, to improve temporal locality. The result is an overall speedup of up to 3.2x over state-of-the-art implementations. We then show how these algorithms can be improved further through a non-naive thread-based decomposition targeting simultaneous multi-threading (SMT) processors. A key aspect of this decomposition is ensuring cache re-use between threads that are co-scheduled at a fine granularity. This optimization yields an additional 50% speedup, for an overall speedup of up to 4.8x. The proposed optimizations also improve performance on SMPs and are likely to benefit emerging processors.
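To illustrate the core idea behind a cache-conscious prefix tree, the sketch below (a hypothetical simplification, not the paper's exact CC-tree layout; the names `Node`, `insert`, and `flatten` are illustrative) flattens a pointer-based prefix tree into parallel arrays laid out in depth-first order. Bottom-up path traversals, the dominant access pattern in FP-growth-style mining, then walk nearby array indices instead of chasing scattered heap pointers, which improves spatial locality and lets hardware cache-line prefetching pull in several nodes per miss:

```python
# Hedged sketch: flattening a pointer-based prefix tree into parallel
# arrays (item, count, parent index) in DFS order. This approximates the
# spirit of a cache-conscious prefix tree; the paper's actual node layout
# and tiling machinery are more involved.

class Node:
    """Pointer-based prefix-tree node (the cache-unfriendly baseline)."""
    def __init__(self, item, count=0):
        self.item = item
        self.count = count
        self.children = {}

def insert(root, transaction):
    """Insert one transaction (a list of items) into the prefix tree."""
    node = root
    for item in transaction:
        node = node.children.setdefault(item, Node(item))
        node.count += 1

def flatten(root):
    """Return parallel arrays (items, counts, parents) in DFS order.

    parents[i] is the array index of node i's parent (-1 for the root),
    so an upward path walk is a sequence of small array reads.
    """
    items, counts, parents = [], [], []
    stack = [(root, -1)]
    while stack:
        node, parent_idx = stack.pop()
        idx = len(items)
        items.append(node.item)
        counts.append(node.count)
        parents.append(parent_idx)
        for child in node.children.values():
            stack.append((child, idx))
    return items, counts, parents

def path_to_root(items, parents, idx):
    """Reconstruct the itemset on the path from node idx up to the root."""
    path = []
    while parents[idx] != -1:
        path.append(items[idx])
        idx = parents[idx]
    return path
```

In this representation, path tiling would amount to partitioning the node array into cache-sized tiles and processing all upward traversals that touch a tile before moving on, so each tile is loaded into cache once rather than repeatedly; the SMT decomposition in the paper co-schedules threads so that such tiles are shared rather than thrashed between hardware contexts.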