
Computational Psychiatry Meets Algorithmic Fairness

  • Writer: Saige Rutherford
  • Jul 9
  • 3 min read

Updated: Jul 14

If you're reading this, chances are you stopped by my poster at the Computational Psychiatry Conference in Tübingen. Thanks for taking the time to chat with me or read my poster, and sorry if, for some reason, I wasn't standing there when you stopped by (there were a lot of cool posters I wanted to check out).


I'll link my contact information here at the top for those who prefer to connect without reading my rambling philosophical thoughts. I am very passionate about the intersection of computational psychiatry and algorithmic fairness, and I would love to collaborate with others in this space. Please reach out!



Here is a digital version of my poster:

[Poster image]

Now, for a few more of my thoughts on the intersection of computational psychiatry and algorithmic fairness. My goal is to issue a call to action to the computational psychiatry community, urging us to acknowledge not only the potential benefits of ML/AI in psychiatry but also its harms. The problems and open questions in psychiatry will not be solved by considering only computational solutions. A critical perspective is necessary to ensure that we do not exacerbate mental health disparities and iatrogenic harms.


The algorithmic fairness community within the broader computer science/machine learning/artificial intelligence communities has been studying real-world algorithmic harms for some time now, and our computational community could learn a great deal from their foundational work. In turn, we can also share our knowledge with the algorithmic fairness community, which has not yet studied the overlap of the prison and mental healthcare systems and the resulting possible discrimination due to mental illness/disability.


I have been working in machine learning applied to psychiatric research (primarily neuroimaging data) since 2017. I have been through the ups and downs, and through numerous conversations, about the "real-world clinical utility" that this work provides. I see the argument from both sides: there has yet to be real-world clinical value from machine learning applications in psychiatry, and there is a lot of overpromising of technical solutions to complex societal problems. However, these are also the early days of working on a very complex problem (improving mental health care and understanding the brain across health and disease), and I'm not so quick to decide that there will never be real-world clinical utility.


Somewhere along the way of working in the ML-for-psychiatry domain, as I grew frustrated by the limited success of predicting mental health variables from neuroimaging data, I turned to different fields for inspiration. Because my background is more computational (in biophysics and computer science), I was drawn to the machine learning community. I discovered the algorithmic fairness community and began reading numerous papers from the leading conference on algorithmic fairness (FAccT). Within this community, I found a diverse range of backgrounds, just like in neuroscience: lawyers, sociologists, psychologists, computer scientists, and statisticians all contribute interesting perspectives on how to use AI for social good. I found it really refreshing how this community does not shy away from tackling big philosophical and societal problems in a single paper, like operationalizing power dynamics (https://doi.org/10.1145/3351095.3372859, https://dl.acm.org/doi/10.1145/3715275.3732144, https://doi.org/10.1145/3442188.3445897), as well as offering self-critical perspectives (https://doi.org/10.1145/3531146.3533157, https://doi.org/10.1145/3531146.3533241).


Science needs critical perspectives. Regarding psychiatric research, I recommend the book Conversations in Critical Psychiatry by Dr. Awais Aftab (a bit of a tangent, since the book is not about computational psychiatry specifically, just psychiatry… although there is obviously a connection between the two). Some of the other ideas from algorithmic fairness that I think would be helpful to bring into computational psychiatry relate to the causal impact of making machine learning predictions in a sensitive population. This is called performativity: some predictive systems do not merely predict; their predictions shape and steer the world toward certain outcomes rather than others (https://doi.org/10.1145/3715275.3732072).
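To make performativity a bit more concrete, here is a minimal, purely hypothetical simulation sketch (plain Python/NumPy, invented numbers, not taken from any of the papers above): a deployed risk score triggers an intervention for flagged individuals, and that intervention changes the very outcome distribution that the next model would be trained on.

```python
# Toy illustration of performativity (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent severity and a noisy "symptom" feature the model can see.
severity = rng.normal(size=n)
symptom = severity + rng.normal(scale=0.5, size=n)

# Pre-deployment world: crisis risk depends only on severity.
p_crisis_pre = 1 / (1 + np.exp(-severity))
crisis_pre = rng.binomial(1, p_crisis_pre)

# A simple deployed "model": flag the top 20% of symptom scores.
flagged = symptom > np.quantile(symptom, 0.8)

# Post-deployment world: flagged individuals receive an intervention
# that (in this toy example) lowers their crisis probability.
p_crisis_post = 1 / (1 + np.exp(-(severity - 1.5 * flagged)))
crisis_post = rng.binomial(1, p_crisis_post)

# The observed association between the score and the outcome shifts,
# purely because the prediction steered who got treated.
print("crisis rate among flagged, pre-deployment :", crisis_pre[flagged].mean().round(3))
print("crisis rate among flagged, post-deployment:", crisis_post[flagged].mean().round(3))
```

The point of the toy example is only that a model retrained on the post-deployment data would learn a different score-outcome relationship than the one it started from.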


Another idea is recourse, which relates to studying power dynamics. When automated systems yield unfavorable decisions, it is imperative to allow for recourse by accompanying the decisions with recommendations that can help affected individuals overturn them (https://arxiv.org/pdf/2002.06278.pdf). Finally, I think there is room to study how to build not just predictive models, but models that can help with clinical decision making (https://proceedings.mlr.press/v108/kilbertus20a.html), though this type of model must also be implemented with extreme caution: there is a lot of evidence that clinicians overweight the recommendations of AI systems, even when those recommendations contain false information (https://doi.org/10.1145/3715275.3732121, http://arxiv.org/abs/2506.17163).
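To give a flavor of what recourse can look like computationally, here is a small, purely illustrative sketch (Python with scikit-learn, made-up features and numbers, not the method of the cited paper): for a linear classifier, the smallest change to a single actionable feature that flips an unfavorable decision can be computed in closed form.

```python
# Minimal sketch of single-feature recourse for a linear classifier
# (hypothetical features; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Two made-up features, e.g. "weeks in treatment" and "symptom score".
X = np.column_stack([rng.normal(10, 3, n), rng.normal(0, 1, n)])
# Synthetic favorable outcome, more likely with more treatment weeks.
y = (0.4 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0, 1, n) > 4).astype(int)

clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Take one individual who received an unfavorable decision.
x = X[clf.predict(X) == 0][0]

# For a linear model, the boundary is w @ x + b = 0, so the change
# needed along actionable feature 0 is: delta = -(w @ x + b) / w[0]
delta = -(w @ x + b) / w[0]
x_new = x.copy()
x_new[0] += delta + 1e-3  # nudge just past the boundary

print("original decision:", clf.predict([x])[0])
print("recourse: change feature 0 by", round(float(delta), 2))
print("decision after recourse:", clf.predict([x_new])[0])
```

Real recourse work adds constraints this sketch ignores (which features are actually actionable, cost of change, plausibility), which is exactly where the power-dynamics framing comes in.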


There is also a sub-community at the intersection of algorithmic fairness and machine learning for healthcare that studies clinician-AI teamwork (https://www.medrxiv.org/content/10.1101/2025.06.07.25329176v1, https://www.microsoft.com/en-us/research/uploads/prod/2021/01/Optimizing_AI_for_Teamwork.pdf).


Anyway, those are all my thoughts for now. There is lots more to come in this space, and I will probably update this blog post as time goes on.
