In February we coded interviews from a NEUROSEC study exploring young people’s perspectives on genetic testing for Alzheimer’s disease. Specifically, we went through the interviews and coded for where participants would like to be tested (doctor, home, research group), the social support they were interested in having with them (friend, family), who they would tell (friend, family), and who should have access to the results. It was really interesting to see the data and learn more about how qualitative data is coded!
We also met with some researchers from the BRC (Catherine Henshall and Claire Murray). We discussed their work (with adults), gave our advice on why young people may be interested in participating in research, and suggested ways to get more young people involved with public engagement and involvement across medical sciences. It was really great to learn more about what is happening around the university and with other advisory groups.
In this session we also discussed a case study: since 2017, Facebook has been using Artificial Intelligence (AI) to scan accounts for people at risk of depression and self-harm. These AI algorithms crunch huge amounts of data, looking for patterns in users’ comments, likes, and even photo filters. While the specific details of the algorithm are undisclosed, if Facebook decides that someone is at “imminent risk”, it alerts local emergency responders – typically the police, who may visit the person for a wellness check. Last year Facebook alerted local emergency responders over 3,500 times. We answered questions such as: What are the ethical issues that arise from this case study? What are the benefits of this programme?