meilin wrote:
okay, so i'm doing an honours programme with around 12-14 others and there was recently a meeting where we all had to present the ai-related new media project we'd been working on over the last semester. mine was on ai ethics/chatbots/psychiatry/differential diagnosis. basically: why some people are left outside of traditional care, how using chatbots can put vulnerable people into an even more vulnerable position, ai hallucinations in psychiatric assessments and so on. i created 20+ pages on figma, a review of related papers, fictional patients based on real practice material for medical students etc. i genuinely put so much effort into it. ofc i did, it was a project that supposedly took an entire semester.
presented it on monday and ofc, i'm not the best presenter. i don't make eye contact with the audience bc it's too much and my voice is monotone af. but i still expected some engagement from the others, because a big part of my project was to raise awareness of how privilege shapes who is able to receive support, and to push for a more critical perspective on chatbots in risky areas such as healthcare, beyond technical performance. my point was never to argue "chatbot good/bad", because i firmly believe people wouldn't use ai or non-ai bots for mental health support if real help was available/safe, yet that's exactly the question my supervisor asked at the end. it kind of felt like she missed the point, and it seemed like the audience did too. that, or the topic was just a bit uncomfortable, bc the room was dead silent.
it made me a bit sad to see that no one checked my project afterwards (i can see the view counts) and that most people just vibe coded a 1-page website or smth like "top x most common ai myths". there were nicer projects too though (i rly liked the one where the person did a notebook on how to use a specific python package for LIF neurons). but idk, i thought since we're all doing ai honours, more people would be interested in the critical aspects of it as well, especially since healthcare-related bots seem to be gaining interest, yet imo this is precisely an area where we have to meet that interest with great caution.
(also a bit beside the point, but there are, or at least were when i was younger, mental health bots which teach cbt skills etc and only use pre-defined answers the user can choose from, which would avoid many of the issues with bots like chatgpt, like privacy or unsafe suggestions)
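to make the choice-based idea concrete, here's a tiny hypothetical sketch (not based on any real bot, all node names and texts are made up): every reply is pre-written and the user can only pick from fixed options, so nothing free-form is ever generated or sent anywhere.

```python
# minimal sketch of a choice-based (non-generative) skills bot:
# a dialogue tree where each node has a fixed prompt and a fixed set
# of options mapping to the next node. no free text, no LLM.

DIALOGUE = {
    "start": {
        "prompt": "what would you like to practice?",
        "options": {"thought log": "thought_log", "breathing": "breathing"},
    },
    "thought_log": {
        "prompt": "pick the thinking trap that fits best.",
        "options": {"catastrophizing": "end", "mind reading": "end"},
    },
    "breathing": {
        "prompt": "try box breathing: in 4s, hold 4s, out 4s, hold 4s.",
        "options": {},  # leaf node: exercise ends here
    },
    "end": {
        "prompt": "noted. naming the trap is a classic first cbt step.",
        "options": {},
    },
}

def step(state: str, choice: str) -> str:
    """advance the dialogue; anything outside the fixed options is rejected."""
    node = DIALOGUE[state]
    if choice not in node["options"]:
        raise ValueError(f"{choice!r} is not one of {list(node['options'])}")
    return node["options"][choice]
```

the point of the design is visible in `step`: arbitrary input (like a disclosure the bot can't handle safely) is simply not representable, it can only be rejected, which sidesteps both the unsafe-suggestion problem and most of the privacy problem.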