A new study from our research network introduces an attention‑based explainable AI (xAI) framework that classifies—and, more importantly, explains—multivariate data from wearables. By combining self‑attention with graph attention (GAT) layers, the model captures both temporal patterns within each signal and relationships across sensors, turning black‑box predictions into clinician‑ready insights.
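To make the idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: self‑attention runs over the time dimension of each sensor stream, and a second attention step over the sensor dimension stands in for a GAT layer on a fully connected sensor graph. The `TemporalCrossSensorAttention` class name, layer sizes, and pooling steps are illustrative assumptions; the returned attention weights suggest how time‑ and modality‑level explanations could be read off such a model.

```python
# Minimal sketch (assumed architecture, not the authors' code):
# temporal self-attention per sensor, then attention across sensors
# as a stand-in for a GAT layer on a fully connected sensor graph.
import torch
import torch.nn as nn


class TemporalCrossSensorAttention(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # per-sensor scalar reading -> d_model
        self.temporal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.sensor_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor):
        # x: (batch, time, sensors) multivariate wearable signals
        b, t, s = x.shape
        h = self.embed(x.unsqueeze(-1))                    # (b, t, s, d)

        # Self-attention over time, independently for each sensor stream.
        h_t = h.permute(0, 2, 1, 3).reshape(b * s, t, -1)  # (b*s, t, d)
        h_t, time_w = self.temporal_attn(h_t, h_t, h_t)    # time_w: weights over time steps
        h_t = h_t.mean(dim=1).reshape(b, s, -1)            # pool time -> (b, s, d)

        # Attention across sensors (cross-modality relationships).
        h_s, sensor_w = self.sensor_attn(h_t, h_t, h_t)    # sensor_w: (b, s, s)

        logits = self.classifier(h_s.mean(dim=1))          # readout over sensors
        return logits, time_w, sensor_w                    # weights double as explanations


# Toy usage: batch of 2, 128 time steps, 8 sensor channels.
model = TemporalCrossSensorAttention()
signals = torch.randn(2, 128, 8)
logits, time_w, sensor_w = model(signals)
print(logits.shape, time_w.shape, sensor_w.shape)
```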
Key Takeaways
- Self‑attention + GAT delivers modality‑ and time‑level explanations clinicians can trust.
- Outperforms a vanilla Transformer while remaining transparent—78.6 % accuracy (positive affect) and 76.3 % (negative affect) on a year‑long student dataset.
- Personalized, interpretable outputs pave the way for safer AI adoption in digital mental‑health care.
At Centralive, we empower researchers to collect rich wearable data and apply interpretable AI at scale—bridging the gap between cutting‑edge models and everyday clinical decision‑making.
Authors: Yuning Wang, Zhongqi Yang, Iman Azimi, Amir M. Rahmani, Pasi Liljeberg



