
Let’s Talk AI: A Primer for Sexual and Reproductive Health Leaders

  • lyndsay843
  • Jun 24
  • 4 min read

Welcome to The Body is the Interface, a reader-supported series exploring the intersection of artificial intelligence, equity, and care. Subscribe to receive future posts straight to your inbox.

Artificial intelligence is showing up everywhere—whether we realize it or not. From symptom checkers and chatbots to appointment reminders and predictive analytics, AI is already shaping how people access information, navigate care, and make decisions.

But in sexual and reproductive health (SRH), where trust, privacy, and lived experience are central, AI can feel more confusing than helpful. What does it actually do? Can it support your work without undermining it? And how can you make smart, ethical choices in a fast-changing tech landscape?

This post is your primer. No jargon, no hype—just what you need to know before your organization starts using (or expanding its use of) AI.

What AI Actually Is—and Isn’t

Artificial intelligence, or AI, refers to systems designed to do things that normally require human intelligence—like reading, writing, sorting, and predicting.

You’ve probably used AI already, even if you didn’t know it:

  • Google Translate

  • Netflix recommendations

  • Auto-suggested replies in Gmail

  • Online symptom checkers

Some of the most common tools in our world today—like ChatGPT—are built on large language models (LLMs). These models are trained on huge amounts of online text and can generate new content based on patterns they've seen.

Think of AI as a lightning-fast intern: it can summarize documents, help write plain-language health materials, or sort survey responses. But it doesn't think like a human. It doesn’t understand context, feelings, or power dynamics. That’s where you come in.

Two Common Myths, Debunked

1. “AI will replace our staff.” It won’t. AI can handle routine tasks like reminders or form processing, but it can’t build trust or offer trauma-informed care. That still takes people.

2. “AI always tells the truth.” It doesn’t. AI makes predictions based on what it's seen in its training data. Sometimes it's right. Sometimes it's dangerously wrong—and it won't tell you which is which.

How AI Works—and Where It Breaks Down

AI tools learn from massive datasets pulled from across the internet: research papers, Reddit, news articles, health blogs, and more. The patterns they find help them predict what word or phrase comes next.
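
To make "predicting the next word" concrete, here is a minimal sketch of that counting idea in Python. The corpus and words are made-up placeholders, and real LLMs use neural networks trained on billions of examples rather than simple counts, but the core task, guessing what comes next from patterns, is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text real models learn from.
corpus = (
    "patients can schedule a visit online "
    "patients can request a refill online "
    "patients can message a nurse anytime"
).split()

# Count which word follows each word (a simple "bigram" model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = next_words.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("patients"))  # -> "can" (seen three times)
print(predict_next("a"))         # -> "visit" (ties broken by first appearance)
```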

But there are serious limitations:

  • It hallucinates. AI sometimes makes things up completely.

  • It lacks nuance. It doesn't understand tone, culture, or emotion.

  • It can reflect bias. If its data is biased, its responses will be too.

Why Human Oversight Matters

You can't just "set it and forget it." Every AI tool needs thoughtful human direction. That includes:

  • Writing clear, culturally competent prompts

  • Reviewing and verifying output before it goes out (a simple example follows below)

  • Understanding what it shouldn't be used for

AI is not plug-and-play. It’s a co-pilot, not the pilot.
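
Here is one small illustration of that co-pilot idea in Python: an approval gate where nothing AI-drafted is sent until a staff member signs off. The draft text and function names are invented for the example, not taken from any particular product.

```python
# A minimal human-in-the-loop gate: AI drafts, a person decides.
# Everything here (names, draft text) is hypothetical.

def send_message(text):
    print(f"SENT: {text}")  # stand-in for a real texting service

def human_approves(draft):
    """A real workflow might route drafts to a review queue; here we just ask."""
    reply = input(f"Approve this AI-generated draft?\n---\n{draft}\n---\n[y/N] ")
    return reply.strip().lower() == "y"

ai_draft = "Reminder: your appointment is Tuesday at 3 PM. Reply C to confirm."

if human_approves(ai_draft):
    send_message(ai_draft)
else:
    print("Draft held for revision. Nothing was sent.")
```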

Everyday Ways AI Can Support SRH Work

When used with care, AI can reduce admin burdens and extend your impact. For example:

  • Translate medical content into plain language

  • Automate text reminders, intake forms, and follow-ups

  • Generate first drafts of newsletters or grant reports

  • Sort open-ended survey responses into usable themes

  • Build chatbots that answer FAQs using vetted information (see the sketch after this list)

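To show what "vetted information" can mean in practice, here is a minimal sketch of an FAQ bot that only ever returns clinician-approved answers and hands off to a person when it isn't confident. The questions, answers, and matching threshold are made-up placeholders; a production chatbot would be far more robust.

```python
# A "vetted answers only" FAQ bot: it matches the user's question against
# clinician-approved entries and never generates medical content on its own.
import difflib

VETTED_FAQ = {
    "what are your clinic hours": "We are open Monday-Friday, 8 AM to 6 PM.",
    "do i need an appointment for testing": "Walk-ins are welcome; appointments get priority.",
    "is my visit confidential": "Yes. Your visit and records are confidential.",
}

def answer(question):
    """Return a vetted answer, or hand off to a human if no close match."""
    cleaned = question.lower().strip("?! ")
    match = difflib.get_close_matches(cleaned, list(VETTED_FAQ), n=1, cutoff=0.6)
    if match:
        return VETTED_FAQ[match[0]]
    # No confident match: route to a person instead of guessing.
    return "I'm not sure. Let me connect you with a staff member."

print(answer("What are your clinic hours?"))
print(answer("Can I get emergency contraception today?"))  # -> handoff to a person
```
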
More Advanced (But Useful) Applications

For teams with a bit more capacity, AI can also help:

  • Identify emerging health trends across populations

  • Generate tailored messaging based on audience needs

  • Support clinical decision-making with quick access to care protocols

  • Build interactive training modules or simulations

These tools won’t replace your staff—but they might help you do more of what matters, more efficiently.

What AI Should Not Do in SRH

In our field, the risks are real. AI should never:

  • Provide medical advice without clinician review

  • Replace trauma-informed conversations

  • Collect or use personal data without full consent

  • Make care decisions without transparency or human appeal

The people most harmed by unethical AI are usually the ones already facing systemic barriers. Responsible use is non-negotiable.

Terms to Know

Here are a few terms that can help you navigate conversations about AI in your organization:

  • Bias: When AI reflects harmful stereotypes in its output

  • Algorithmic discrimination: When systems create unequal outcomes based on race, gender, disability, etc.

  • Data privacy: Especially critical post-Roe, where digital footprints can carry legal risk

  • Consent: People deserve to know when AI is used and how their data is handled

  • Explainability: If a system makes a decision, you should be able to explain how

Before You Start: Questions to Ask

If you're exploring AI tools, ask these questions first:

  • What real need are we solving?

  • Could this create harm or erode trust?

  • Who’s making this decision? Who’s excluded?

  • Do we have the capacity to use this well?

Bring these questions into team meetings, RFPs, and program planning. They're not just tech questions—they’re equity questions.

Why This Moment Matters

AI is moving fast—and the people who are often left out of tech decisions are the ones who will feel the consequences most.

You don’t need to be a data scientist to shape this future. You just need to stay informed, grounded in your mission, and ready to ask the hard questions.

This series will walk with you through what’s next: case studies, toolkits, red flags to watch for, and stories of AI being used right. Together, we’ll imagine and build a future where tech doesn’t replace care—it helps us do more of it.
