
How do I measure the emissions from AI usage?

AI tool usage falls under Scope 3 emissions, specifically Category 1 (Purchased goods and services), even if you're not paying for the tools.

Measuring the emissions from your AI usage depends on how you're using AI (e.g. cloud-based APIs, on-premise models, training vs inference) and what data is available to you. Here’s a breakdown of how you can estimate emissions based on typical AI use cases:

1. Understand your AI usage

Start by defining:

  • Type of AI usage: Are you using:

    • Public APIs (e.g. OpenAI, Google Cloud, AWS, Azure)?

    • In-house infrastructure (e.g. training or running models on your own servers)?

  • Purpose: inference (everyday prompts/queries) vs. model training (much more energy-intensive)

  • Volume of usage: Number of API calls, compute hours, or GPU time.
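
If it helps, you can record this profile in a simple structure that the estimation steps below can consume. A minimal sketch in Python; every provider, purpose, and figure is a hypothetical placeholder:

```python
# Hypothetical AI usage inventory for one reporting month.
# All providers, purposes, and volumes below are placeholders.
usage_profile = {
    "cloud_api": {
        "provider": "OpenAI",       # public API usage
        "purpose": "inference",
        "prompts_per_month": 120_000,
    },
    "in_house": {
        "hardware": "NVIDIA A100",  # on-premise GPU
        "purpose": "inference",
        "gpu_hours_per_month": 250,
    },
}
```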


2. Estimate emissions from cloud AI services 

If you're using a cloud-based AI provider, emissions will mostly come from compute (data centre energy use). You can estimate these using:

a) Provider-specific emissions data (if available)

Some AI providers offer dashboards or emissions estimates:

  • Microsoft Azure and Google Cloud provide carbon tracking dashboards for enterprise customers.

  • OpenAI currently does not offer public emissions data, but you can estimate based on usage and known compute requirements.

b) Activity-based estimates (in the absence of provider data)

Use publicly available estimates like:

Activity | Emissions estimate
1,000 ChatGPT prompts | ~0.5–2 kg CO₂e (depending on model and server energy mix)
1 hour of GPU inference (A100) | ~0.2–0.4 kg CO₂e
Training a large LLM (like GPT-3) | 500+ tonnes CO₂e (for context)

You can multiply the number of queries or compute hours by these factors to estimate emissions.
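
As an illustration, here is that multiplication in code. The volumes are hypothetical, and the factors are rough midpoints of the ranges in the table above:

```python
# Activity-based estimate: usage volume x per-activity emissions factor.
# Factors are rough midpoints of the published ranges above; volumes are
# hypothetical examples.
KG_PER_1000_PROMPTS = 1.25   # midpoint of ~0.5–2 kg CO2e per 1,000 prompts
KG_PER_A100_GPU_HOUR = 0.3   # midpoint of ~0.2–0.4 kg CO2e per inference hour

monthly_prompts = 120_000    # hypothetical prompt volume
monthly_gpu_hours = 250      # hypothetical GPU inference hours

emissions_kg = (
    (monthly_prompts / 1_000) * KG_PER_1000_PROMPTS
    + monthly_gpu_hours * KG_PER_A100_GPU_HOUR
)
print(f"Estimated monthly emissions: {emissions_kg:.0f} kg CO2e")
# (120 x 1.25) + (250 x 0.3) = 150 + 75 = 225 kg CO2e
```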

c) Average spend-based method

If you're using and paying for cloud-based AI models (like ChatGPT), this spend can be converted into emissions using spend-based carbon accounting.

If your company and employees use free versions of these tools and your usage is material, you should estimate emissions using one of the approaches (a or b) above.
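
A minimal sketch of the spend-based method, assuming an illustrative emissions factor (not Trace's actual factor; see the final section):

```python
# Spend-based estimate: spend in $ x kg CO2e per $ factor.
# The factor is illustrative only; your carbon accounting platform or
# provider will publish the factor actually used.
KG_CO2E_PER_USD = 0.3            # hypothetical spend-based factor

monthly_ai_spend_usd = 1_000     # e.g. $1,000/month on AI subscriptions/APIs
monthly_kg = monthly_ai_spend_usd * KG_CO2E_PER_USD
print(f"~{monthly_kg:.0f} kg CO2e/month, ~{monthly_kg * 12 / 1_000:.1f} t CO2e/year")
# 1,000 x 0.3 = 300 kg/month, or 3.6 t/year
```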


3. Estimate emissions from in-house model use

If you're training or running models on your own infrastructure:

  • Track the hardware used (e.g. GPU type, CPU, memory)

  • Measure or estimate energy consumption (kWh) using tools like:

    • NVIDIA’s NVML tools or the CodeCarbon library (for Python); a CodeCarbon sketch follows this list

  • Multiply energy use by grid emissions factors (kg CO₂e/kWh for your electricity source)
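
For instance, a run could be wrapped with CodeCarbon's EmissionsTracker, which samples hardware power draw (via NVML on NVIDIA GPUs) and applies your grid's carbon intensity. A sketch; the workload and the manual fallback figures are placeholders:

```python
from codecarbon import EmissionsTracker

def run_workload():
    ...  # placeholder for your actual training or inference job

# CodeCarbon tracks energy use while the workload runs and converts it to CO2e.
tracker = EmissionsTracker(project_name="in-house-model")
tracker.start()
run_workload()
emissions_kg = tracker.stop()  # estimated kg CO2e for the run
print(f"CodeCarbon estimate: {emissions_kg:.3f} kg CO2e")

# Manual fallback: measured energy (kWh) x grid factor (kg CO2e/kWh).
# Both figures below are hypothetical; use your own meter readings and
# your grid's published emissions factor.
energy_kwh = 42.0
grid_factor = 0.4
print(f"Manual estimate: {energy_kwh * grid_factor:.1f} kg CO2e")
```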


4. Tools you can use

  • CodeCarbon (open-source Python package to track ML emissions)

  • ML CO2 Impact calculator by Hugging Face: https://mlco2.github.io/impact

  • Cloud provider emissions dashboards


5. Average emissions per $ spent on AI tools

The CO₂ emissions per dollar spent on AI tools can vary significantly depending on:

  • Whether you’re using API-based AI services (e.g. OpenAI, AWS, Google Cloud)

  • The type of AI task (simple inference vs complex model training)

  • The energy efficiency of the data centers used

  • The carbon intensity of the electricity grid powering those data centers

However, based on current industry data, we can estimate a rough average range:

Type of use | Estimated kg CO₂e / $USD
Inference (API use like ChatGPT, image generation) | 0.1 – 0.5
Training or fine-tuning models | 0.5 – 5+
General cloud AI services (e.g. AWS SageMaker, Azure ML) | 0.2 – 1.0

Example calculation

Let’s say you spend $1,000/month on a mix of AI API usage:

  • At 0.3 kg CO₂e / $, that’s roughly 300 kg CO₂e per month or 3.6 tonnes CO₂e per year.
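
If your spend splits across use types, you could apply the table's per-category factors line by line. The spend split below is hypothetical, and the factors are midpoints of the ranges above:

```python
# Spend-based estimate broken down by use type.
# Factors are illustrative midpoints of the ranges in the table above;
# the monthly spend split is hypothetical.
KG_PER_USD = {
    "inference": 0.3,   # 0.1–0.5 kg CO2e / $
    "training": 2.75,   # 0.5–5+ kg CO2e / $
    "cloud_ai": 0.6,    # 0.2–1.0 kg CO2e / $
}
monthly_spend_usd = {"inference": 700, "training": 200, "cloud_ai": 100}

monthly_kg = sum(spend * KG_PER_USD[use] for use, spend in monthly_spend_usd.items())
print(f"~{monthly_kg:.0f} kg CO2e/month, ~{monthly_kg * 12 / 1_000:.1f} t CO2e/year")
# (700 x 0.3) + (200 x 2.75) + (100 x 0.6) = 820 kg/month, or ~9.8 t/year
```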

Important caveats:

  • This is a broad estimate. Actual values depend heavily on:

    • Compute intensity of the models (e.g. GPT-4 vs smaller models)

    • Provider efficiency and energy sourcing

    • Geographic location of data centers

  • Some providers (e.g. Google Cloud) have carbon-neutral data centers, which may reduce actual emissions significantly.

These estimates are derived from a combination of public research, academic studies, cloud provider disclosures, and industry tools. Here's a breakdown of the key sources behind them:

Key Sources for AI Emissions Factors

1. Academic Research

  • Patterson et al. (2021), Google Research: "Carbon Emissions and Large Neural Network Training"

    • https://arxiv.org/abs/2104.10350

    • Quantifies emissions for training large models on Google’s infrastructure.

    • Found training a large NLP model (like GPT-3 scale) could emit hundreds of tonnes of CO₂e.

  • Strubell et al. (2019), UMass Amherst: "Energy and Policy Considerations for Deep Learning in NLP"

    • https://arxiv.org/abs/1906.02243

    • Estimated emissions from training and fine-tuning NLP models; found large models could emit 300,000+ kg CO₂e for a single training run on U.S. average grid.

2. ML CO2 Impact Calculator (Hugging Face)

  • https://mlco2.github.io/impact

    • Uses model training parameters (GPU hours, hardware, location) to estimate emissions.

    • Useful for DIY calculations if you’re training or hosting models.

3. CodeCarbon

  • Open-source tool developed by Mila Québec and BCG GAMMA.

  • https://codecarbon.io

    • Tracks CO₂ emissions from ML experiments based on hardware, runtime, and power grid carbon intensity.

4. Cloud Provider Data

  • Microsoft Azure: Provides carbon tracking for enterprise customers.

  • Google Cloud: Operates with 100% renewable energy matching; publishes data center efficiency metrics (PUE).

  • AWS: Publishes a sustainability report, but detailed per-service emissions are still limited.

5. Third-party Industry Analyses

  • PwC, BCG, and McKinsey have all published estimates of the carbon intensity of AI at scale (especially generative AI).

  • Example: BCG (2023) estimated that a single ChatGPT query can emit ~0.05g to 4g CO₂e, depending on the model and data center.

Why it’s hard to be precise

  • Most providers don’t yet disclose per-query or per-API emissions factors.

  • Energy use varies dramatically by:

    • Model type (GPT-4 vs LLaMA)

    • Location of compute (renewable-powered vs fossil-fuel grid)

    • Hardware (older vs newer GPUs)

    • Load balancing and data center efficiency (PUE)

What emissions factor does Trace use?

In the absence of a single, authoritative estimate of emissions per dollar of cloud AI usage, Trace uses industry-average emissions factors associated with software usage.

We continually monitor the market for accurate, supplier-specific emissions factors and are happy to adjust this approach to suit our customers' needs and usage profiles.
We are consistently monitoring the market for accurate and supplier specific emissions factors and happy to adjust this approach according to our customers' needs and usage profile.