Reflection 3: Equity in Digital Spaces

This week, we had the pleasure of hearing from Maha Bali, a professor at the American University in Cairo. I have always understood that there is inequity in digital spaces, but this talk showed me how deeply ingrained that inequity is in our society. Maha presented us with Nancy Fraser’s framework, which divides digital inequality into three dimensions: 

  1. Economic inequality
  2. Cultural inequality 
  3. Political inequality  

All of these dimensions shape how people experience digital spaces, and we often see surface-level attempts at equity: tools that are meant to be “open” or accessible can still exclude certain voices. What really stood out to me was how algorithms can reinforce these biases. I may be biased myself in being drawn to this aspect of the talk, as I’ve been doing research on algorithms for my inquiry. 

Something that really stayed with me was the way these algorithms and AI may impact neurodivergent people. Maha Bali pointed out that many AI systems are designed around what is considered “normal” behaviour, which often means neurotypical patterns. This becomes a problem in situations like online exam proctoring, where students who behave differently, such as those with ADHD, autism, or anxiety, may be flagged as suspicious because they do not align with what the system is trained to view as an “average” pattern. 

In the past, I’ve learned about various algorithmic biases, but inequity towards neurodivergent people due to algorithmic bias is a new issue for me to reflect on. This worries me for the future of education: as the academic world becomes more and more digital, and AI is increasingly incorporated into educational settings, neurodivergent individuals could be at risk of significant bias from these algorithms. When AI is built around a narrow idea of what is considered “normal,” it can unintentionally harm individuals who don’t fit that definition, leading to further inequality. This shows just how important it is to design algorithmic tools with inclusivity in mind. Developers and educators must consider diverse ways of thinking, learning, and interacting when implementing AI in educational settings. 

I found this really interesting article that talks more about this issue:

https://cdt.org/insights/how-automated-test-proctoring-software-discriminates-against-disabled-students/
