Video April 14, 2026

UnTangled with Jen McFarland: What Does Sustainable AI Use Look Like?

Jen Frazier (she/her/hers), CEO & Founder
The Guilt Is Real. But Opting Out Is Not the Answer.
I have been in a lot of conversations lately where people who care deeply about the planet hear “AI” and immediately feel bad. Like they are personally responsible for every data center humming away somewhere. Like using a generative AI tool to help draft a grant report makes them hypocritical.
I get it. I really do.
But when I sat down with Jen McFarland, founder of Marit Digital and fellow B Corp, for last week’s episode of UnTangled, we got into something that I think reframes the whole conversation. Yes, AI has a real environmental footprint. No, that does not mean the answer is to log off and never look back.
The rules of this technology are being written right now. If the people who care about sustainability and ethics opt out, they are not staying clean. They are just letting someone else make the decisions.
What We Actually Know (And What We Don’t)

Here is the honest truth about the environmental impact of AI: the data is murky, and a lot of what you have heard is probably either overblown or undersourced.

Jen McFarland has done real research on this, and her main message is that most companies are hiding behind “proprietary” claims and not publishing meaningful data at all. Google and Mistral are two notable exceptions. They have both done some public accounting of what they call the “cradle to grave” cost of a prompt, from server hardware to training to inference to eventual disposal.

Is Google’s accounting perfect? Almost certainly not. Critics say they are underestimating. But Jen’s point was a good one: at least they are doing the work and sharing something. That kind of transparency deserves some credit, even if incomplete.

Meanwhile, I will just say it: Anthropic, the company behind Claude (the AI tool I am using as I write this), is one of the least transparent companies when it comes to environmental impact. It is also not green-hosted. Hi, Anthropic. We see you. This is exactly the kind of public accountability that can nudge big companies toward doing better. So consider this my nudge.

The biggest energy problem right now is not even what most people think. It is not your prompts or my prompts. It is that data centers are being built faster than renewable energy infrastructure can support them. The bottleneck for AI is energy, and right now a lot of that energy is coming from the sources we have been trying to retire. Coal. Natural gas. All the fun ones.

And one more point to put this in perspective: crypto is still a massive consumer of data center energy. It is quickly being eclipsed by generative AI, but we are not seeing the same level of hand-wringing about crypto. Which is interesting.

The Golf Course Problem

You know what uses about 30 times more water than data centers?

Golf courses.

Golf courses are being built in deserts. Nobody is shutting them down. And yet, somehow, the outrage lands differently when we are talking about technology.

I am not saying AI’s footprint is nothing. I am saying let’s be honest about where we direct our energy and why. Gas cars are genuinely terrible for the environment and we have not opted out of those en masse either. We are working on better alternatives, pushing for better policy, making smarter individual choices where we can, and staying in the game.

That is exactly what sustainable AI use looks like too.

Token Maxing Is a Thing and It Is Absurd

One thing Jen brought up that made my jaw drop a little: there are companies right now with internal leaderboards tracking which employees use the most AI tokens in a day. Some are reportedly benchmarking employees against using AI worth the cost of their own salary. Every day.

This is called token maxing and it is, as Jen put it, pretty nauseating.

It is also almost perfectly designed to produce the worst possible outcomes. No strategic intent. No thoughtful prompting. Just volume for the sake of it. If you want to use AI in the least efficient, most wasteful, least effective way imaginable, that is how you do it.

This is a phase, hopefully. But it is a good reminder that how organizations decide to use these tools matters enormously, not just for outcomes, but for the actual environmental impact of the sector as a whole.

So What Can You Actually Do?

This is where Jen’s practical framing was really helpful. You do not have to be perfect. You just have to be intentional.

  1. Prompt smarter, not more. Think through what you need before you open the tool. Do not run 15 image iterations when three would do. The environmental cost is in the volume of computing, and you can reduce that without reducing your results much at all.
  2. Build smaller, focused workflows. Custom prompts, prompt libraries, and purpose-built smaller models for specific tasks are more efficient than asking a general model to do everything. They also hallucinate less, which is a nice bonus.
  3. Look at your offset options. Tools like Ecolytics have browser extensions that can measure your AI usage and help you offset it. It is not a perfect solution, but it is something.
  4. Go beyond the big five. Jen recommends looking at Perplexity, which lets you choose your model and opt for shorter responses by default. Mistral is open source and EU-based, which means it operates under stronger regulatory constraints. GreenGPT is worth investigating if environmental alignment is a top priority for you. Change Agent was built by and specifically for the social sector and prioritizes correcting for the biases built into the internet writ large. Those differences matter.
  5. Ask harder questions of the tools you are already paying for. How are they hosted? What are they doing on renewable energy? How transparent are they about their footprint? These questions create pressure, and pressure works.

You can find a living, breathing resource document with more tools and research over at Jen’s site: maritdigital.com/resources

Your Organization Needs a Position on This

You probably have a values statement. You probably have a policy on travel, or procurement, or data privacy. This is the moment to add AI sustainability to that list.

Not because you will get it perfect. But because thinking it through, naming what matters to your organization, and making some intentional choices is far better than either token maxing or opting out.

If the people who care about these issues are at the table, we have a real chance to shape how these tools evolve. We can push companies toward transparency. We can model what responsible adoption looks like. We can build usage into our cultures in ways that align with our missions.

That only happens if we stay in the game.

Watch the full episode, and check out Jen McFarland’s full resources at maritdigital.com/resources.

If you are ready to get started on your AI journey at your organization, let’s talk!
