Designing Access: The Body is the Interface
Technology in reproductive health isn't neutral. It's coded with the same systems of inequity we've fought offline.
By Lyndsay Sanborn
June 17, 2025
Technology Isn’t Neutral in Reproductive Health
Technology is not neutral, and in sexual and reproductive health, it never has been. It reflects the values of the people who build it, the biases in the data it learns from, and the inequities embedded in the health systems it draws from. In a field shaped by racism, misogyny, homophobia, and ableism, digital tools can either reinforce exclusion or become part of the solution.
The Cost of Digital Bias
“Minority and low-income women are underrepresented in healthcare datasets used to train AI, increasing the risk of biased outputs that can exacerbate existing health disparities.” — PMC, 2024
When I say the body is the interface, I mean that digital decisions have real, physical consequences. They show up not just on screens, but in people’s lives—in missed diagnoses, harmful search results, and chatbots that can’t recognize context or care. It’s what happens when someone reaches out for information, and the algorithm responds with silence, or worse, with judgment.
When the body is the interface, bias cuts deeper. Harm finds home. And silence scales.
When the Algorithm Decides Who Matters
“Dermatology AI models perform 27–36% worse on darker skin tones, doubling the diagnostic gap between light and dark skin.” — DDI Benchmark Study, 2022
In technology, an interface is the point where two systems meet and interact. It’s how a user connects with a machine—the keyboard, the touchscreen, the chat window. But with AI, the interface is more than a point of contact. It’s a filter shaped by the data it’s trained on. If that data erases, distorts, or omits the experiences of BIPOC, LGBTQ+, and disabled folks, or anyone pushed to the margins, then the interface becomes a gatekeeper. Biased data doesn’t just misrepresent; it re-creates exclusion. It encodes who is seen, who is heard, and who is helped.
In reproductive health, bad data can lead to search engines directing users to crisis pregnancy centers instead of legitimate abortion providers. It can surface judgmental, outdated content when someone looks for information about birth control or emergency contraception. AI tools might dismiss pain reported by Black women or fail to understand the needs of trans or nonbinary people. The interface isn’t just the device—it’s a reflection of someone’s body, identity, and lived experience, distorted by someone else’s assumptions and encoded into digital systems.
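One way to make this kind of gatekeeping concrete is a disaggregated audit: instead of reporting a single accuracy number, a team measures how a model performs for each group it claims to serve. The sketch below is purely illustrative, with synthetic records and placeholder group names rather than any real system or dataset; it shows the shape of the audit, where an aggregate metric can look acceptable while one group absorbs most of the missed cases.

```python
# Illustrative sketch only: a disaggregated audit of a hypothetical classifier.
# All records are synthetic; group names and numbers are placeholders,
# not findings about any real model or population.
from collections import defaultdict

# (group, true_label, model_prediction) for synthetic examples
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 1, 0),
]

def false_negative_rate(rows):
    """Share of true positives the model missed, e.g. real needs it failed to flag."""
    positives = [(y, p) for _, y, p in rows if y == 1]
    if not positives:
        return None
    return sum(1 for y, p in positives if p == 0) / len(positives)

by_group = defaultdict(list)
for group, y, p in records:
    by_group[group].append((group, y, p))

# A single aggregate number hides the gap; per-group rates expose it.
print(f"overall false-negative rate: {false_negative_rate(records):.2f}")
for group, rows in by_group.items():
    print(f"{group}: false-negative rate {false_negative_rate(rows):.2f}")
```

Running the same check on real evaluation data, broken out by race, gender identity, disability, language, or any other axis that matters to the community being served, is one of the simplest accountability steps a team can build into its pipeline.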
A Case Study in Surveillance and Discrimination
“In Argentina, an AI system designed to predict teenage pregnancies disproportionately targeted poor, Indigenous, and migrant girls, raising concerns about surveillance and rights violations.” — WIRED, 2023
In 2018, the provincial government of Salta, Argentina, partnered with Microsoft to deploy an AI tool called the "Technology Platform for Social Intervention." The system combined sensitive data—demographics, geography, disability, and home conditions—to predict which girls were likely to become pregnant within five years. In practice, the AI disproportionately flagged poor, Indigenous, and migrant girls, prompting concerns that it functioned more as a surveillance mechanism than as a tool for support.
This data-driven profiling was not used to offer care, resources, or education. Instead, it created watchlists of girls without their knowledge or consent—placing them under increased scrutiny and stigmatization. Many of the girls flagged were not at actual risk but were included because they fit a profile derived from biased historical patterns, reinforcing structural inequality rather than addressing its root causes.
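A simplified, entirely hypothetical sketch shows how this kind of profiling can go wrong even without malicious intent: a model trained on historical records learns whatever patterns those records contain, so if past flags tracked where caseworkers and surveillance were already concentrated rather than actual need, the resulting "risk scores" reproduce that pattern. Nothing below reflects the Salta platform's actual code, data, or features; the neighborhood names and numbers are invented to illustrate the feedback loop.

```python
# Hypothetical illustration only: a model trained on biased historical flags
# reproduces that bias as a "risk score". No real data or system is shown.
from collections import Counter, defaultdict

# Historical records: (neighborhood, was_flagged). Suppose past flagging tracked
# where scrutiny was already concentrated, not actual need.
history = (
    [("neighborhood_x", True)] * 80 + [("neighborhood_x", False)] * 20 +
    [("neighborhood_y", True)] * 10 + [("neighborhood_y", False)] * 90
)

# A naive "model": score each neighborhood by its historical flag rate.
flags = defaultdict(Counter)
for hood, flagged in history:
    flags[hood][flagged] += 1
score = {hood: c[True] / (c[True] + c[False]) for hood, c in flags.items()}

# Evaluate new cases with identical underlying circumstances; only the
# neighborhood label differs, yet the outcomes diverge sharply.
THRESHOLD = 0.5
for hood in ("neighborhood_x", "neighborhood_y"):
    label = "flagged" if score[hood] >= THRESHOLD else "not flagged"
    print(f"{hood}: learned risk score {score[hood]:.2f} -> {label}")
```

Because the model's output then directs more scrutiny toward the same neighborhoods, the next round of "historical" data is even more skewed, and the loop tightens.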
Social justice organizations raised alarms over its lack of transparency, consent, and context, arguing that it echoed patterns of control and discrimination deeply rooted in Argentina’s history. The case became emblematic of how AI systems, even when intended for social good, can perpetuate harm when divorced from lived experience and ethical design.
This case also reinforces a central truth of this work: the body is the interface. The consequences of biased algorithms don’t just live in data; they show up in the lives, identities, and opportunities of those most vulnerable. When prediction replaces care and oversight is absent, the body becomes the site of algorithmic harm. What we can learn from this case is clear: data alone cannot define risk, and prediction without purpose can quickly become surveillance. Ethical AI in reproductive health must start with consent, be guided by community voice, and be accountable to those most impacted. Otherwise, the same systems that promise care can become tools of harm.
Designing Toward Justice
This work takes time, reflection, and rigor. But when equity, access, autonomy, and lived experience guide our design, we’re not just building tools. We are confronting systemic oppression and creating pathways toward justice.
Because at the end of the day, it always comes back to this:
The body is the interface.
In future posts, I’ll explore how AI works (and how it doesn’t), its current use cases in sexual and reproductive health, and how SRH organizations can begin to adopt AI in ethical and equity-centered ways. We’ll look at what it takes to build tools that serve rather than surveil, and how we can design a digital future that supports care, not control. From data bias to community design, this series will walk through the challenges and possibilities of applying AI to one of the most human parts of our lives: our health.
If you care about the future of reproductive justice, digital rights, and ethical innovation, I invite you to subscribe, share, and build with me.