
Inquiry: Process Update 2

At this stage of my inquiry, I focused on understanding how health information spreads on social media and how this contributes to the risks of self-diagnosing online. The first article I examined looked at health information on TikTok, where young people increasingly use the platform as a search engine. The authors found that hashtags such as #celiactok, #diabetestok, and #sinustok are used to find information about these conditions.

The study focused on analyzing content under #sinustok. The researchers were interested in determining the quality of health information on TikTok. They collected videos during a single 24-hour period to avoid algorithmic changes and categorized them by uploader type, content category, and content style. They then evaluated the videos’ quality using the Patient Education Materials Assessment Tool for Audiovisual Material, which assesses the understandability, actionability, and reliability of health information. The results were concerning: 44% of the videos contained misinformation. Videos posted by non-medical influencers (creators with more than 10,000 followers who did not identify as medical professionals) made up nearly half of the content, were significantly more likely to contain misinformation, and had lower quality scores overall. In contrast, videos created by healthcare professionals contained more accurate information and scored higher on measures of reliability and quality.

From this article, I learned that social media platforms rely heavily on recommender algorithms that prioritize content that is estimated to predict engagement. These algorithms analyze multiple signals, including similarities between users, features of the content, the user’s context (such as location or device), and past engagement patterns. For example, if users with similar interests frequently interact with certain health videos, the algorithm predicts that other users will also engage with that content and pushes it to their feeds.
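To make this concrete, here is a minimal toy sketch of the idea described above: recommending content by predicting engagement from the behaviour of similar users. This is an illustrative simplification I wrote for this post, not TikTok's actual algorithm, and all the names and sample data are hypothetical.

```python
# Toy engagement-prediction recommender (illustrative only, not any
# platform's real system). A video is pushed to a user when users with
# similar viewing histories have already engaged with it.

def similarity(user_a, user_b, history):
    """Jaccard similarity of the sets of videos two users engaged with."""
    a, b = history.get(user_a, set()), history.get(user_b, set())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def predict_engagement(user, video, history):
    """Similarity-weighted count of other users who engaged with the video."""
    return sum(
        similarity(user, other, history)
        for other, videos in history.items()
        if other != user and video in videos
    )

def recommend(user, history, top_n=3):
    """Rank videos the user hasn't seen by predicted engagement."""
    seen = history.get(user, set())
    candidates = {v for vids in history.values() for v in vids} - seen
    return sorted(candidates,
                  key=lambda v: predict_engagement(user, v, history),
                  reverse=True)[:top_n]

# Hypothetical histories: bob overlaps heavily with alice, so alice's
# health video "diabetes_qa" is ranked above cara's "dance_clip" for bob.
history = {
    "alice": {"sinus_tips", "celiac_story", "diabetes_qa"},
    "bob":   {"sinus_tips", "celiac_story"},
    "cara":  {"sinus_tips", "dance_clip"},
}
print(recommend("bob", history))  # → ['diabetes_qa', 'dance_clip']
```

One search or interaction updates the history, which immediately shifts the rankings; that feedback loop is why a single symptom search can quickly fill a feed with similar health content.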

To reflect on what I’ve researched so far, my partner Parmis and I decided to try using these platforms as if we were individuals looking for a diagnosis, and then to reflect on our individual experiences by answering some reflection questions.

TIKTOK

  1. How easy was it to find videos related to the symptoms I searched?
  2. Did the algorithm start showing me similar videos after my initial search?
  3. Were any of the videos I found made by accredited doctors, or just influencers and random users talking about their personal diagnoses or symptoms?

CHATGPT

  1. Did the conversational format make the information feel more trustworthy?

GOOGLE

  1. Did the search results include credible medical sources or mostly general websites?
  2. Did I feel overwhelmed by the amount of information available?

My Answers:

TikTok: Very easy to find symptom-related videos. After one search, the algorithm showed similar content. However, most videos were from influencers or users sharing personal experiences, not accredited doctors, reducing reliability.

ChatGPT: The conversational format made the information feel more trustworthy and clear. It allowed for follow-up questions, but lacked visible sources to verify accuracy.

Google: Provided a mix of credible medical sources and general websites. While more evidence-based, the large amount of information felt overwhelming and required effort to evaluate reliability.
