Result:

Title:
Counterfactual-Guided Explanation and Assertion-Based Characterization for CPS Debugging
Authors:
Publisher Information:
Anonymous
Publication Year:
2025
Collection:
Zenodo
Document Type:
conference object
Language:
unknown
DOI:
10.5281/zenodo.15870922
Rights:
Creative Commons Attribution 4.0 International ; cc-by-4.0 ; https://creativecommons.org/licenses/by/4.0/legalcode
Accession Number:
edsbas.8A7EC6AF
Database:
BASE

Further Information

Counterfactual-Guided Explanation and Assertion-Based Characterization for CPS Debugging

This repository contains a Python-based framework for the DeCaF tool (Counterfactual-Guided Explanation and Assertion-Based Characterization for CPS Debugging). The primary goal is to build machine learning models that not only predict system behavior but also provide insight into the causal relationships between system parameters. This is achieved by training various models, evaluating their performance, and generating hypothetical interventions (counterfactuals) to understand how changes in input variables affect system outcomes.

Features

- Diverse Causal Modeling: supports a range of machine learning models for causal inference, including:
  - M5 Model Trees: a robust algorithm that combines decision trees with linear regression models at the leaves.
  - RIPPER (Repeated Incremental Pruning to Produce Error Reduction): a rule-based learning algorithm that generates a set of IF-THEN rules.
  - Random Forest: an ensemble learning method that constructs a multitude of decision trees.
  - Standard Classifiers: integrates with common scikit-learn classifiers such as Support Vector Machines (SVC).
- Comprehensive Evaluation Framework: provides a structured suite for evaluating model performance. It automates hyperparameter tuning and cross-validation, and computes a wide array of metrics such as accuracy, precision, recall, F1-score, and AUC.
- Hypothetical Intervention (Counterfactual) Generation: includes tools to generate counterfactual explanations using the dice-ml library. These explanations show what minimal changes to the input features would alter the model's prediction, giving insight into the system's behavior.
- Model Interpretability: offers functionality to extract and visualize decision paths from tree-based models, making the models' logic more transparent and understandable.
- Automated Workflow: the entire pipeline, from data loading and preprocessing to model training, evaluation, and ...
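The repository generates counterfactuals with the dice-ml library; the underlying idea can be illustrated without it. The sketch below is an assumption-laden toy, not DeCaF's implementation: it trains a classifier on synthetic data and brute-forces a counterfactual by nudging one feature at a time until the prediction flips (dice-ml uses more principled search).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy CPS-like data: two parameters, outcome = 1 when their sum exceeds 10.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 2))
y = (X.sum(axis=1) > 10).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def simple_counterfactual(model, x, step=0.25, max_steps=80):
    """Brute-force sketch: perturb one feature at a time until the
    prediction flips. Illustrative only -- not the dice-ml algorithm."""
    original = model.predict([x])[0]
    for i in range(len(x)):
        for direction in (+1, -1):
            candidate = x.copy()
            for _ in range(max_steps):
                candidate[i] += direction * step
                if model.predict([candidate])[0] != original:
                    return candidate
    return None  # no counterfactual found within the search budget

query = np.array([4.0, 4.0])  # predicted class 0 (sum = 8)
cf = simple_counterfactual(clf, query)
print(query, "->", cf)
```

The returned point is the "hypothetical intervention": the smallest change found, along one axis, that alters the predicted system outcome.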
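For the interpretability feature, scikit-learn already exposes the decision logic of tree-based models. A minimal sketch (the toy data and feature names are assumptions, not DeCaF's) that renders one tree of a random forest as IF-THEN style rules, similar in spirit to a RIPPER rule set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import export_text

# Synthetic data: the label depends only on the first parameter.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(300, 2))
y = (X[:, 0] > 5).astype(int)

forest = RandomForestClassifier(n_estimators=10, max_depth=3,
                                random_state=1).fit(X, y)

# Extract the decision path of the first tree in the ensemble
# as human-readable threshold rules.
rules = export_text(forest.estimators_[0],
                    feature_names=["param_a", "param_b"])
print(rules)
```

Each printed branch is a conjunction of threshold tests ending in a predicted class, which is the kind of decision-path view the README describes.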