2026 Winning Essay – Daniel Zhang
Take a look at one of this year’s winning entries to the Immerse Education Essay Competition from the Artificial Intelligence category.
Congratulations to all participants and in particular to those who have won 100% scholarships!
Machine Learning Fairness and the Measures Needed to Prevent AI Bias
by Daniel Z
In recent years, machine learning (ML) has become a foundational element in decision-making systems across sectors such as healthcare, finance, criminal justice, and education. While its efficiency and scalability have been widely celebrated, the growing integration of ML into high-stakes environments has surfaced a critical ethical challenge: fairness.¹ Far from being a purely technical concern, fairness in machine learning relates to the equitable treatment of individuals and social groups by algorithms, and it demands serious interdisciplinary attention.
The concept of fairness in ML is multifaceted. Scholars distinguish between principles such as demographic parity, which requires equal outcomes across groups, and equal opportunity, which ensures comparable true positive rates among qualified individuals.² However, these criteria are often mutually exclusive, and choosing between them involves complex trade-offs that are context-dependent.³ Without clearly defined fairness goals, models may perpetuate or exacerbate societal inequities.
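The tension between these two criteria can be made concrete with a small sketch. The following Python snippet (using hypothetical toy labels, not data from any cited study) computes the quantity each criterion compares: demographic parity looks at the overall positive-prediction rate per group, while equal opportunity looks at the true positive rate among qualified individuals. The toy data is chosen so that the first criterion is satisfied while the second is violated, illustrating why the two can pull in different directions.

```python
# Minimal sketch of two fairness criteria on hypothetical toy data.
# y is the true label (1 = qualified), y_hat is the model's prediction.

def positive_rate(y_hat):
    """Demographic parity compares P(y_hat = 1) across groups."""
    return sum(y_hat) / len(y_hat)

def true_positive_rate(y, y_hat):
    """Equal opportunity compares P(y_hat = 1 | y = 1) across groups."""
    preds_for_qualified = [p for t, p in zip(y, y_hat) if t == 1]
    return sum(preds_for_qualified) / len(preds_for_qualified)

# Hypothetical groups A and B
y_a, y_hat_a = [1, 1, 0, 0], [1, 0, 1, 0]
y_b, y_hat_b = [1, 1, 1, 0], [1, 1, 0, 0]

# Both groups receive positive predictions at the same rate (0.5 vs 0.5),
# so demographic parity holds ...
print(positive_rate(y_hat_a), positive_rate(y_hat_b))

# ... yet qualified members of group A are recognised less often
# (0.5 vs 0.667), so equal opportunity is violated.
print(true_positive_rate(y_a, y_hat_a), true_positive_rate(y_b, y_hat_b))
```

Which gap matters more depends on the application, which is precisely the context-dependent trade-off the essay describes.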
This concern is not theoretical. In 2016, ProPublica reported on the COMPAS algorithm, used in the U.S. criminal justice system to assess recidivism risk. Black defendants were nearly twice as likely as white defendants to be falsely labeled “high-risk.”⁴ The source of this disparity lay in the training data, which reflected historically biased policing and sentencing patterns. Similarly, in 2018, Amazon discontinued an AI-based hiring tool after discovering it consistently downgraded applications from women.⁵ The model, trained on a decade of male-dominated hiring data, had learned to penalize terms like “women’s chess club” or references to women’s colleges.
Facial recognition technologies reveal another concerning pattern. A widely cited study by Buolamwini and Gebru (2018) demonstrated that commercial facial analysis systems misclassified darker-skinned women up to 34.7% of the time, compared to 0.8% for lighter-skinned men.⁶ These errors can have severe consequences in law enforcement, surveillance, and security contexts—domains in which accuracy is paramount.
To mitigate such risks, researchers have proposed several technical solutions. Reweighting and data augmentation are two strategies to address class imbalances in training data, ensuring more equitable representation of minority groups.⁷ In addition, fairness constraints can be embedded in model objectives to minimize discriminatory outcomes.⁸ Increasingly, explainable AI (XAI) techniques are being deployed to audit decision-making pathways and identify sources of hidden bias.⁹
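One of these preprocessing strategies, reweighting, can be sketched in a few lines. Following the idea in Kamiran and Calders (2012), each training example receives the weight P(group) × P(label) / P(group, label), so that group membership and outcome appear statistically independent in the weighted data. The snippet below uses hypothetical toy data; it is an illustrative sketch, not the authors’ implementation.

```python
from collections import Counter

def reweighting(groups, labels):
    """Compute per-example weights P(g) * P(y) / P(g, y), in the style
    of Kamiran & Calders (2012). Under-represented (group, label)
    combinations receive weights above 1, over-represented ones below 1."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Hypothetical toy data: positive labels are rare in group "a".
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 0, 0, 1, 1, 0]

# The single (a, 1) example is upweighted to 1.5; the common (a, 0)
# examples are downweighted to 0.75, and symmetrically for group "b".
print(reweighting(groups, labels))
```

A downstream classifier trained with these sample weights sees a dataset in which neither group is disproportionately associated with the negative outcome.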
Yet, technical solutions alone are insufficient. The biases that pervade ML systems often reflect structural inequities in society. For this reason, many scholars advocate for a sociotechnical approach: one that brings together ethicists, legal scholars, engineers, and affected communities.¹⁰ Legislative frameworks are also emerging to codify protections against algorithmic harm. The European Union’s proposed Artificial Intelligence Act, for instance, seeks to classify and regulate “high-risk” AI applications, including those that affect employment and civil liberties.¹¹
In conclusion, machine learning fairness must be treated not as an optional feature, but as a foundational design principle. The harms caused by biased algorithms—ranging from wrongful incarceration to employment discrimination—underscore the urgency of this issue. Building fair and accountable AI systems requires a dual commitment to rigorous technical methods and a deep understanding of social context. Only through such integrated efforts can we ensure that the rise of machine learning serves all members of society justly.
Bibliography
Mehrabi, N. et al. (2021). “A Survey on Bias and Fairness in Machine Learning.” ACM Computing Surveys, 54(6), pp. 1–35.
Hardt, M., Price, E., & Srebro, N. (2016). “Equality of Opportunity in Supervised Learning.” Advances in Neural Information Processing Systems, 29.
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. fairmlbook.org.
Angwin, J. et al. (2016). “Machine Bias.” ProPublica.
Dastin, J. (2018). “Amazon Scraps Secret AI Recruiting Tool.” Reuters.
Buolamwini, J. & Gebru, T. (2018). “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” FAT, pp. 77–91.
Kamiran, F. & Calders, T. (2012). “Data Preprocessing Techniques for Classification Without Discrimination.” Knowledge and Information Systems, 33(1), pp. 1–33.
Zafar, M. B. et al. (2017). “Fairness Constraints: Mechanisms for Fair Classification.” AISTATS, pp. 962–970.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?” KDD, pp. 1135–1144.
Raji, I. D. et al. (2020). “Closing the AI Accountability Gap.” FAT, pp. 33–44.
European Commission. (2021). “Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act).”
Why Apply To The Immerse Education Essay Competition?
Are you a highly motivated student aged 13-18? Have you ever wanted to experience studying at one of the world’s top universities?
The Immerse Education essay competition gives you the chance to submit your own essay for the opportunity to be awarded a scholarship to attend one of our award-winning academic or career-based summer schools.
Fairness in AI is one of the biggest questions facing the future of technology. If you want to understand how machine learning systems make decisions, why bias can appear in algorithms, and what can be done to build more responsible tools, our artificial intelligence summer school could help you explore these ideas in depth. Our computer science summer school could also help you develop the technical foundations behind AI, from data and programming to model design and problem-solving. For students looking for a more flexible way to start exploring these topics, our online summer school offers the chance to learn from expert tutors without putting your current studies on pause.
How To Apply To The Immerse Education Essay Competition?
If you’re aged 13-18 and you’re interested in applying to the Immerse Education essay competition then please visit our essay competition page for more details.