Video | May 13, 2026

UnTangled with Abby Lupi: Impact Happens at the Speed of Trust

Jen Frazier
(she/her/hers)
CEO & Founder
We are living in what I have started calling the semi-gelatinous moment of AI. Not liquid. Not solid. Definitely not concrete. We are swimming around in something that has not hardened yet, and honestly, that is a good thing. It means we can still shape it.
That is the frame I kept coming back to in last week’s UnTangled conversation with Abby Lupi, Research Manager at CareerVillage.org. Abby came onto my radar at the Good Tech Summit a few weeks back. You know those moments where you sit down next to someone and ten minutes later you are like, you are rad, you need to be on my show? That was this. And she delivered.
We got into some genuinely meaty territory around AI, trust, and what it means for the social sector to show up in this moment. Here is what stayed with me.
Trust Is Not a Soft Concept Right Now

Abby came in with a title that I could not have written better myself: impact happens at the speed of trust. She traces it back to the book of the same name, but she applied it in a way that really landed for me. In the social sector, everything we do runs on a complex network of relationships. The workforce program point of contact who talks to the educator who talks to the learner. None of that works if there is no baseline of trust at every link in the chain.

That is true for human relationships, and I would argue it is equally true for our relationships with the tools we are using.

Here is the thing we do not say out loud enough: we are placing an enormous amount of trust in a handful of very large, very powerful companies every single time we use their AI tools. We are trusting that they are not abusing their position. We are trusting that the data we feed them is being handled responsibly. We are trusting that the values baked into their systems are close enough to ours that the outputs are safe to use.

That is a lot of trust to extend to an entity you have never had a conversation with, and most of us do it without thinking twice.

Does Familiarity Equal Trustworthiness?

I want to challenge something. We tend to default to the biggest, most well-known AI platforms, and I think a lot of that comes down to a kind of shortcut: if it is well known, it must be reliable. But familiarity is not the same as trustworthiness. Popularity is not a values statement.

I have been sitting with a question I genuinely cannot answer yet: can I name a specific use case where a smaller, mission-aligned AI provider fell meaningfully short compared to one of the big five? Honestly, not really. And that makes me wonder how much of our tool loyalty is actually about performance, and how much is just proximity, ease, and a bit of laziness dressed up as thoughtfulness or strategy.

I say this as someone with a PC running Windows, several Google accounts, and an iPhone sitting right here on the table. I am not pretending to be above big tech. But AI might be the moment where we can draw a more intentional line. The tools are not yet so entrenched that switching costs are enormous. There are alternatives being built specifically for and by people who share our values. Organizations like Change Agent, Thaura, GreenPT, and Latimer exist and are doing serious work. The question is whether we will show up for them before we are locked in somewhere else.

Opting Out Is Not a Values Move. It Is a Power Move in the Wrong Direction.

Abby said something that I have been chewing on since we got off the call: if you have strong opinions about responsible AI use, but you keep removing yourself from the conversations where decisions are being made, you are shrinking your own power.

I believe this deeply. The jello has not hardened. We are still in a moment where the social sector’s voice, spending, and values can actually influence what gets built and how. If the entire social good sector collectively said we will not use platforms that are not meeting our standards around bias, data privacy, environmental impact, and equity, those companies would feel it. Maybe not immediately. But they would feel it. And that matters.

We are not without power here. Our dollars are a form of advocacy. The tools we choose are a form of values expression. The questions we ask of those tools and the companies behind them are a form of accountability. Asking your AI provider about their data practices, their environmental footprint, and their approach to bias correction is not an annoying extra step. It is exactly what responsible adoption looks like.

Get in. Be loud. Ask the hard questions. Do not wait for someone else to shape this.

Start With a Theory of Change, Not a Tool

One of the cleaner frameworks Abby brought to the conversation came straight from her work at CareerVillage. When they started exploring how to integrate AI, the first question was never what can this tool do? It was what problem are we trying to solve, and does this tool fit into our theory of change?

That order of operations matters a lot. AI in search of a problem is a recipe for wasted energy and potential harm. AI in service of a clearly defined problem is something very different.

The other piece of this that Abby named, and that I think gets overlooked, is the importance of human thinking as the prerequisite. She talked about how she approaches her own writing: pen and paper first, whiteboard, drafting, and sitting with the material for a few days. Then bringing AI in to help with structure, not to generate the thinking. The ideas stay hers. The voice stays hers. The tool helps map what she already knows into a form that is useful.

That is a model I think more of us should try. Do the hard thinking first. Use the tools to help with the parts where you hit a ceiling, not to replace the thinking altogether.

We Are All Figuring This Out Together. That Is the Point.

One of the things I appreciate most about Abby is that she works with this stuff every single day and she still says plainly: nobody has it figured out. Nobody. Not the researchers. Not the engineers at the big labs who are literally building this. We are all in a global phase of learning and discovery.

That means if you are walking into the AI space feeling like you are behind, you are not. You are in very good company.

Abby’s parting thought was a good one: be so comfortable and so confident saying I don’t know. Ask more questions. Have conversations. Build community. Do not self-isolate.

The social sector has always been powered by relationships. That is not a liability in this moment. It is our edge. Use it.

Ready to Get Into the Jello? We Can Help.

Firefly has been helping social good organizations adopt technology for nearly 20 years, and AI is the most powerful and consequential iteration of that work yet. We have launched AI Adoption services for nonprofits and social good organizations built around exactly the framework Abby described: start with the problem, not the tool. Your values lead. We help you figure out the rest.

The sector needs to be in this conversation while the jello is still moving. If you are ready to get off the sidelines, let’s talk.

Watch the full episode here and then, genuinely, let me know what you think. The jello is still moving and we need more people in the bowl!
