
Explainable AI: New framework increases transparency in decision-making systems
In high-stakes situations like medical diagnostics, understanding why an AI model made a decision is as important as the decision itself. A new framework called Constrained Concept Refinement offers accurate, explainable predictions with low computational cost. Credit: ChatGPT image prompted by Salar Fattahi.

A new explainable AI technique transparently classifies images without compromising accuracy. The method, developed at the University of Michigan, opens up AI for situations where understanding why a decision was made is just as important as the decision itself, like medical diagnostics.

If an AI model flags a tumor as malignant without specifying what prompted the result—like size, shape or a shadow in the image—doctors cannot verify the result or explain it to the patient. Worse, the model may have picked up on misleading patterns in the data that humans would recognize as irrelevant.

“We need AI systems we can trust, especially in high-stakes areas like health care. If we don’t understand how a model makes decisions, we can’t safely rely on it. I want to help build AI that’s not only accurate, but also transparent and easy to interpret,” said Salar Fattahi, an assistant professor of industrial and operations engineering at U-M and senior author of the study, to be presented on the afternoon of July 17 at the International Conference on Machine Learning in Vancouver, British Columbia.

When classifying an image, AI models associate vectors of numbers with specific concepts. These number sets, called concept embeddings, can help AI locate things like “fracture,” “arthritis” or “healthy bone” in an X-ray. Explainable AI works to make concept embeddings interpretable—meaning a person can understand what the numbers represent and how they influence the model’s decisions.
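To make the idea concrete, the following minimal PyTorch sketch shows how concept scores might be computed and fed to a small, interpretable classifier. The dimensions, random tensors and concept names are illustrative stand-ins, not the actual embeddings or architecture from the study.

```python
import torch
import torch.nn.functional as F

# Toy sizes; a real system would obtain image features and concept embeddings
# from a pretrained multimodal encoder such as CLIP.
feature_dim, num_classes = 512, 2
concept_names = ["fracture", "arthritis", "healthy bone"]

# Hypothetical stand-ins for encoder outputs (random, for illustration only).
image_features = torch.randn(1, feature_dim)                       # one encoded X-ray
concept_embeddings = torch.randn(len(concept_names), feature_dim)  # one vector per concept

# Concept scores: cosine similarity between the image and each concept vector.
scores = F.cosine_similarity(
    image_features.unsqueeze(1), concept_embeddings.unsqueeze(0), dim=-1
)  # shape: (1, number of concepts)

# An interpretable classifier reads only these named scores, so a prediction
# can be traced back to concepts like "fracture" or "healthy bone".
classifier = torch.nn.Linear(len(concept_names), num_classes)
logits = classifier(scores)

for name, score in zip(concept_names, scores.squeeze(0).tolist()):
    print(f"{name}: {score:+.3f}")
```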

Previous explainable AI methods bolt interpretability features onto a model after it is already built. While these post-hoc approaches can identify key factors that influenced a model's predictions, they are, counterintuitively, not explainable themselves. They also treat concept embeddings as fixed numerical vectors, ignoring potential errors or misrepresentations inherent in them.

For instance, these models embed the concept of “healthy bone” using a pretrained multimodal model such as CLIP. Unlike carefully curated datasets, CLIP is trained on large-scale, noisy image-text pairs scraped from the internet. These pairs often include mislabeled data, vague descriptions or biologically incorrect associations, leading to inconsistencies in the resulting embeddings.

Published on the arXiv preprint server, the new framework—Constrained Concept Refinement or CCR—addresses the first problem by embedding and optimizing interpretability directly into the model’s architecture. It solves the second by introducing flexibility in concept embeddings, allowing them to adapt to the specific task at hand.

The red arrows represent the backpropagation training process for classic explainable AI models. This paper extends the training process to refine concept embeddings with constraints on their deviation from initial embeddings, represented by green arrows and box. Credit: arXiv (2025). DOI: 10.48550/arxiv.2502.06775

Users can tune the framework to favor interpretability, by placing tighter restrictions on the concept embeddings, or accuracy, by allowing the embeddings to stray a bit further. This added flexibility allows the potentially inaccurate concept embedding of "healthy bone," as obtained from CLIP, to be automatically adjusted and corrected by adapting to the available data. By leveraging this flexibility, the CCR approach can enhance both the interpretability and accuracy of the model.
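The general idea of constraining how far refined concept embeddings may drift can be sketched as a projected update loop, where a radius parameter plays the role of that interpretability/accuracy knob. The sketch below is an illustrative approximation under that assumption, not the paper's exact algorithm; the toy data, the radius value and the helper function project_to_ball are hypothetical.

```python
import torch
import torch.nn.functional as F

def project_to_ball(refined, initial, radius):
    """Pull each refined concept embedding back into an L2 ball of the given
    radius around its initial embedding (e.g., one obtained from CLIP)."""
    delta = refined - initial
    norms = delta.norm(dim=-1, keepdim=True).clamp_min(1e-12)
    return initial + delta * torch.clamp(radius / norms, max=1.0)

# Hypothetical toy data: random stand-ins for encoded images and their labels.
torch.manual_seed(0)
num_images, feature_dim, num_concepts, num_classes = 64, 512, 3, 2
image_features = torch.randn(num_images, feature_dim)
labels = torch.randint(num_classes, (num_images,))

# Initial concept embeddings (in practice, from a pretrained encoder), a
# learnable refined copy, and a small classifier over the concept scores.
initial_embeddings = F.normalize(torch.randn(num_concepts, feature_dim), dim=-1)
refined_embeddings = torch.nn.Parameter(initial_embeddings.clone())
classifier = torch.nn.Linear(num_concepts, num_classes)

# The radius acts as the interpretability/accuracy knob: a tighter ball keeps
# concepts close to their original meaning; a looser one lets them adapt more.
radius = 0.1
optimizer = torch.optim.SGD([refined_embeddings, *classifier.parameters()], lr=0.1)

for step in range(200):
    scores = F.cosine_similarity(                       # concept scores per image
        image_features.unsqueeze(1), refined_embeddings.unsqueeze(0), dim=-1
    )
    loss = F.cross_entropy(classifier(scores), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():                               # re-impose the constraint
        refined_embeddings.copy_(
            project_to_ball(refined_embeddings, initial_embeddings, radius)
        )
```

Shrinking the radius keeps each concept close to its original, human-recognizable meaning, while enlarging it gives the model more room to fit the data.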

“What surprised me most was realizing that interpretability doesn’t have to come at the cost of accuracy. In fact, with the right approach, it’s possible to achieve both—clear, explainable decisions and strong performance—in a simple and effective way,” said Fattahi.

CCR outperformed two explainable baseline methods (CLIP-IP-OMP and label-free CBM) in prediction accuracy while preserving interpretability when tested on three image classification benchmarks (CIFAR10/100, ImageNet and Places365). Importantly, the new method reduced runtime tenfold, offering better performance at lower computational cost.

“Although our current experiments focus on image classification, the method’s low implementation cost and ease of tuning suggest strong potential for broader applicability across diverse machine learning domains,” said Geyu Liang, a doctoral graduate of industrial and operations engineering at U-M and lead author of the study.

For instance, AI is increasingly used to decide who qualifies for a loan, but without explainability, applicants are left in the dark when they are rejected. Explainable AI can increase transparency and fairness in finance by ensuring a decision was based on specific factors like income or credit history rather than biased or irrelevant information.

“We’ve only scratched the surface. What excites me most is that our work offers strong evidence that explainability can be brought into modern AI in a surprisingly efficient and low-cost way,” said Fattahi.

More information:
Geyu Liang et al., Enhancing Performance of Explainable AI Models with Constrained Concept Refinement, arXiv (2025). DOI: 10.48550/arxiv.2502.06775

Journal information:
arXiv


Provided by
University of Michigan College of Engineering


Citation:
Explainable AI: New framework increases transparency in decision-making systems (2025, June 13)
retrieved 13 June 2025
from https://techxplore.com/news/2025-06-ai-framework-transparency-decision.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.
