
Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist provides a checklist-based framework for evaluating generative AI systems in healthcare.

TREGAI GitHub DOCX checklist

Understanding the reasoning processes of complex medical-image classifiers using counterfactual images and expert clinicians, going beyond saliency maps.

The inferences of most machine-learning models powering medical artificial intelligence are difficult to interpret. Here we report a general framework for model auditing that combines insights from medical experts with a highly expressive form of explainable artificial intelligence. Specifically, we leveraged the expertise of dermatologists for the clinical task of differentiating melanomas from melanoma 'lookalikes' on the basis of dermoscopic and clinical images of the skin, and the power of generative models to render 'counterfactual' images to understand the 'reasoning' processes of five medical-image classifiers. By altering image attributes to produce analogous images that elicit a different prediction by the classifiers, and by asking physicians to identify medically meaningful features in the images, the counterfactual images revealed that the classifiers rely both on features used by human dermatologists, such as lesional pigmentation patterns, and on undesirable features, such as background skin texture and colour balance. The framework can be applied to any specialized medical domain to make the powerful inference processes of machine-learning models medically understandable.
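The core idea above, searching for a minimally altered input that flips a classifier's prediction, can be sketched in miniature. This is a hypothetical toy, not the paper's code: a hand-made differentiable "generator" maps a latent code to image features, a linear "classifier" scores them, and a finite-difference search nudges the latent code until the decision flips, yielding a counterfactual.

```python
# Toy counterfactual search in a generator's latent space (illustrative only).
# In the paper, the generator is a learned generative image model and the
# classifier is a trained melanoma classifier; both are stand-ins here.

def generate(z):
    # Toy "generator": maps a 2-d latent code to 2 image-like features.
    return [z[0] + z[1], z[0] - z[1]]

def classify(x, w=(1.0, -0.5), b=0.0):
    # Toy linear "classifier": positive score -> class A, else class B.
    return w[0] * x[0] + w[1] * x[1] + b

def counterfactual(z, step=0.05, max_iters=200, eps=1e-4):
    """Finite-difference gradient steps on the classifier score until the
    predicted class flips; returns the counterfactual latent code."""
    z = list(z)
    # Push the score toward the opposite sign of the current prediction.
    target_sign = -1.0 if classify(generate(z)) > 0 else 1.0
    for _ in range(max_iters):
        score = classify(generate(z))
        if score * target_sign > 0:  # prediction has flipped
            return z
        # Finite-difference gradient of the score w.r.t. each latent dim.
        grad = []
        for i in range(len(z)):
            zp = list(z)
            zp[i] += eps
            grad.append((classify(generate(zp)) - score) / eps)
        z = [zi + step * target_sign * gi for zi, gi in zip(z, grad)]
    return z

z0 = [1.0, 0.2]
z_cf = counterfactual(z0)
print(classify(generate(z0)) > 0, classify(generate(z_cf)) > 0)
# Comparing generate(z0) with generate(z_cf) shows which features changed;
# in the paper, clinicians inspect such image pairs for meaningful features.
```

In the real framework the latent edit is regularized to stay close to the original image, and dermatologists then label which visible changes (pigmentation pattern, background texture, colour balance) drove the flip.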

Kinesiology - Motion GPT