The role of women in fighting bias in data and AI models

By Alana Reich | July 15, 2025 | Estimated reading time: 4 minutes

Why AI bias is a big deal

As humans, we all have natural biases: whether they stem from our cultural upbringing or our personal preferences, it’s often difficult for us to stay completely neutral. With that in mind, there is a common misconception that technology and AI serve as an impartial antidote to human subjectivity. However, this is far from the truth: because humans train AI systems and feed them information, technology can inadvertently amplify our own biases even further. Although such prejudices are often unconscious and more a product of historical and societal inequalities than of individual intent, it is nevertheless our ethical duty to ensure harmful biases are not perpetuated further. Take, for example, facial recognition tools: studies have shown that facial recognition technology has a higher error rate for darker-skinned women than for lighter-skinned men. The stakes are clearly high: when we turn a blind eye to AI bias, we risk discrimination, systemic exclusion, and societal harm. As technology continues to evolve rapidly, addressing algorithmic bias is no longer a technical nice-to-have but a moral duty.

Where bias comes from

To combat bias in AI systems, it’s important to first understand how it makes its way into our technologies in the first place. As mentioned, humans are the ones feeding data into AI models; if the input is skewed, the output is bound to be skewed as well (i.e. “garbage in, garbage out”). For example, hiring tools that automate resume screening may be trained on existing data drawn predominantly from male candidates. Because artificial intelligence works by identifying patterns, such a model can end up penalizing resumes from women simply because its training data reflected a male-dominated hiring history rather than the full picture. Biased training data, in turn, creates cyclical discrimination that, although unintended, deepens social inequality, particularly along gender lines.

You may be wondering: how does this even happen? How does the training data become so biased in the first place? Often it comes down to a lack of gender diversity on tech and development teams. Homogeneous teams are more likely to miss potential harms and overlook ethical concerns, not because they don’t care about diverse voices, but because they lack the perspective needed to question their own assumptions. This is exactly why having more women in artificial intelligence is not just about being equitable and inclusive, but is also a surefire way to improve the systems we build.
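To make the “garbage in, garbage out” idea concrete, here is a minimal sketch in Python (using pandas and scikit-learn) with a made-up toy dataset of past hiring decisions that favoured men. It is purely illustrative, not a real screening pipeline, and the column names are hypothetical; in real systems the gender signal usually shows up indirectly, through proxies like wording, schools, or career gaps, which is part of what makes it so hard to catch.

```python
# Illustrative sketch only: toy, hypothetical "historical hiring" data
# in which past decisions favoured men even at comparable experience.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "years_experience": [5, 6, 4, 7, 5, 6, 4, 7],
    "is_male":          [1, 1, 1, 1, 0, 0, 0, 0],
    "hired":            [1, 1, 1, 1, 0, 1, 0, 0],  # skewed outcomes
})

# Train a simple model on the biased history.
model = LogisticRegression()
model.fit(history[["years_experience", "is_male"]], history["hired"])

# Two candidates with identical experience who differ only in gender.
candidates = pd.DataFrame({
    "years_experience": [6, 6],
    "is_male":          [1, 0],
})
probs = model.predict_proba(candidates)[:, 1]
print(f"Predicted hire probability (male):   {probs[0]:.2f}")
print(f"Predicted hire probability (female): {probs[1]:.2f}")
# The model simply reproduces the historical skew: same resume, lower score.
```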
Why women’s perspectives are crucial

So, there is a silver lining: with the help of diverse perspectives in the workplace, especially those of women, we can train artificial intelligence to better reflect our modern world. There is immense value in prioritizing diversity in tech. For one, underrepresented voices can spot blind spots others may miss: by bringing their lived experiences to the table, marginalized women are uniquely positioned to identify potential areas of discrimination that might otherwise be overlooked. Women also often bring an interdisciplinary perspective to a field that tends to be highly technical and logic-driven. Whether it’s a background in ethics, the humanities, or the social sciences, having more women in data science brings an element of critical thinking to technology that we desperately need to build ethical AI models. Finally, there is also something to be said for intertwining empathy and human-centric design with tech: women are often noted for being emotionally attuned, and they draw on those soft skills to advocate for the user experience.

Women leading the charge

There are many notable examples of inspiring female figures in the space of algorithmic fairness. Women like Timnit Gebru, the former co-lead of Google’s ethical AI team, Joy Buolamwini, the founder of the Algorithmic Justice League, and Meredith Broussard, a data journalist and author of Artificial Unintelligence, are making waves and changing the way we look at data bias. Beyond individuals, many initiatives are actively working to break down barriers in ethical AI, including Women in AI, Black in AI, Woven (formerly Womxn in Data Science), and Data & Society. Kishawna Peck from Woven recently joined our Navigator Series here at Lighthouse Labs, where she highlighted that “approximately 23% of data roles are held by women”. It’s clear that there is still much work to be done to debias AI, and there’s no doubt that women are playing a pivotal role in the evolution of the field.

How inclusion leads to better models

Gender diversity and inclusion in tech isn’t just about representation: women are key to fighting bias in algorithms and building fairer models, because diverse teams bring their unique backgrounds together to collaborate and problem-solve more effectively. Once you have a well-rounded team, many practical steps can be taken to limit AI bias. For one, it’s important to ensure the data collection process is inclusive and uses datasets that reflect varied identities and lived experiences. Regular data audits can also help minimize bias by spotting gaps quickly, as the short sketch below illustrates, and explainability tools can be leveraged to better explain and justify the outputs of AI models. Ultimately, though, advocacy plays a big part in shifting the culture of AI development as a whole. The only way to successfully implement the approaches listed above is to fundamentally change how we as a society view artificial intelligence tools. If we keep calling out the harm of algorithmic unfairness, we can, in turn, stop bias in its tracks.
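As a concrete illustration of what a basic data audit might look like, here is a small Python sketch using pandas; the column names and tiny dataset are hypothetical stand-ins for whatever data your team actually holds. It checks how well each group is represented and whether model outcomes differ sharply between groups. Real audits go much deeper, but even a check like this can surface obvious gaps early.

```python
# A minimal audit sketch over a hypothetical dataset with a "gender"
# column and a binary model output "approved". Toy data for illustration.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["woman", "man", "woman", "man", "man", "woman", "man", "woman"],
    "approved": [0, 1, 1, 1, 1, 0, 1, 0],
})

# 1) Representation: is each group actually present in the data?
print(df["gender"].value_counts(normalize=True))

# 2) Outcome rates: does the model approve one group far more often?
rates = df.groupby("gender")["approved"].mean()
print(rates)

# 3) A rough disparate-impact check (the "four-fifths" rule of thumb):
ratio = rates.min() / rates.max()
print(f"Selection-rate ratio: {ratio:.2f}  (below 0.8 is a common red flag)")
```

Running a check like this on every data refresh, rather than once at launch, is what turns it from a one-off exercise into a genuine audit.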
Encouraging more women to join the field

If you’re a woman intrigued by the idea of contributing to ethical innovation, now is the perfect time to consider a career in AI ethics, data science, or policy. Whether you already have some data knowledge up your sleeve or you’re completely new to the field, the good news is that it’s never been easier to gain the skills you need. With plenty of online learning resources available, including Lighthouse Labs’ Data Science Bootcamp, you can jumpstart your journey into AI in a matter of weeks. Beyond education, it’s also important to factor in the role of mentorship, networks, and support systems. Connecting with like-minded people who are passionate about inclusive AI development can open doors to unique opportunities and spark worthwhile conversations. Consider browsing LinkedIn or signing up for a tech event in your area to start building your professional circle.

A call for inclusive innovation

The need for diverse voices in AI has never been greater. Despite the ongoing debate about the value of diversity, equity, and inclusion programs, now is the time to make the structural and cultural changes that will ensure artificial intelligence does not perpetuate social inequities. Ultimately, the only way to move toward a solution is to take action: we must all continue to learn about AI bias, stay mindful of it, and support those who are working tirelessly to advocate for algorithmic fairness and diversity in data science as a whole.